Businesses are increasingly moving multiple applications to the cloud using containers and utilizing Kubernetes for orchestration, according to Zettaset.
However, findings also confirm that organizations are inadequately securing the data stored in these new cloud-native environments and continue to leverage existing legacy security technology as a solution.
Businesses face significant IT challenges as they strive to keep up with the demands of digital transformation. Now more than ever, companies are rapidly developing and deploying new applications to maintain a competitive edge.
Companies must invest in high-performance data protection
The adoption of containers, microservices and Kubernetes for orchestration plays a significant role in these digital acceleration efforts. And yet, while many companies are eager to adopt these new cloud-native technologies, research shows that companies are not accurately weighing the benefits of enterprise IT innovation against the inherent security risks.
“Our goal with this research was to determine whether enterprise organizations who are actively transitioning from DevOps to DevSecOps are investing in proper security and data protection technology. And while findings confirm that companies are in fact making the strategic decision to shift towards cloud-native environments, they are currently ill-equipped to secure their company’s most critical asset: data.
“Companies must invest in high-performance data protection to secure critical information in real time across any architecture.”
- Organizations are embracing the cloud and cloud-native technologies: 39% of respondents have multiple production applications deployed on Kubernetes. But, companies are still struggling with the complexities associated with these environments and how to secure deployments.
- Cloud providers exert considerable influence over Kubernetes distribution: A little over half of those surveyed are using open source Kubernetes available through the Cloud Native Computing Foundation (CNCF). And 34.7% of respondents are using a Kubernetes offering managed by an existing cloud provider such as AWS, Google, Azure, and IBM.
- Kubernetes security best practices have yet to be identified: 60.1% of respondents believe there is a lack of education and awareness about the proper ways to mitigate risk associated with storing data in cloud-native environments. And 43.2% are confident that the introduction of Kubernetes creates multiple vulnerable attack surfaces.
- Companies have yet to evolve their existing security strategies: Almost half of respondents (46.5%) are using traditional data encryption tools to protect their data stored in Kubernetes clusters. Over 20% are finding that these traditional tools are not performing as desired.
“The results of our research substantiate the notion that enterprise organizations are moving forward with cloud-native technologies such as containers and Kubernetes. What we were most interested in discovering was how these companies are approaching security,” said Charles Kolodgy, security strategist and author of the report.
“Companies overall are concerned about the wide range of potential attack surfaces. They are applying legacy solutions but those are not designed to handle today’s ever-evolving threat landscape, especially as data is being moved off-premise to cloud-based environments.
“To stay ahead of what’s to come, companies must look to solutions purposely built to operate in a Kubernetes environment.”
Driven by a strong curiosity to know how computers and computer programs are made, how they work, and how safe they are, Sheila A. Berta, Head of Security Research at Dreamlab Technologies, has been interested in cybersecurity since her early teens.
For the last several years, she has been conducting investigations in a variety of information security areas like hardware hacking, car hacking, wireless security, malware and – more recently – Docker, Kubernetes and cloud security.
“At the moment everything tends to migrate to containerized, serverless and/or cloud environments with a microservices focus, so DevOps and other IT professionals have been forced to learn how to implement and work with these infrastructures,” she said, explaining her more recent research interests.
“The attack and defense techniques that can be applied in these environments are completely different from the techniques applied in ‘traditional’ architectures, so it’s very important that security professionals now acquire the necessary skills to competently protect these modern infrastructures.”
One of the ways they can achieve this is to attend a training course on the subject.
Virtual trainings through HITBSecTrain
During HITBCyberWeek, which is scheduled to start on November 15, Berta’s colleague Sol Ozzan will hold an online workshop focused on Docker and Kubernetes defense that will serve as a preview for a 2-day virtual training course that the two will conduct through HITBSecTrain in February next year.
“Our Attack and Defense on Docker, Swarm and Kubernetes training at HITBSecTrain will provide attendees with the practical knowledge they need to analyze and secure containerized & Kubernetes-orchestrated environments,” Berta told Help Net Security.
“Our trainings have a lot of hands-on laboratories. We start with the Docker fundamentals and then jump into the labs with Docker Black Box and White Box analysis, as well as defense on containers and Docker images. At the end of the first day, we focus on Swarm (official Docker orchestrator) with a variety of practices in attack and defense.”
The second day is fully dedicated to Kubernetes. They start with the fundamentals of this technology and then dive into the hands-on with Black Box, Gray Box, and White Box analysis. Sophisticated attack techniques will be explained, as well as advanced security features that can be implemented in this famous orchestrator.
This is not the first time she has held a training course related to container environments – she also did so at Black Hat USA 2020. But, as can be expected, they are continuously updating the materials: they have lately added more attack techniques targeting different Docker and Kubernetes components, such as the Docker Registry and the Kubernetes kubelet, as well as more open source tools that can be used to analyze and secure these infrastructures.
She also couldn’t help but speak highly of another 2-day training course that two other Dreamlab Technologies colleagues are set to hold in February.
“I had the pleasure of seeing how the trainers built the materials for the Attacking and Securing Industrial Control Systems (ICS) course and I have to say that it is the most practical training on ICS hacking I have ever seen. It even has practices for air-gap bypass techniques,” she noted.
“I believe practical experience is very important when it comes to topics like these. We have prepared a realistic ICS environment that students will access throughout the course to perform all the exploitation techniques explained by the trainers.”
Qualys announced Container Runtime Security, which provides runtime defense capabilities for containerized applications. This new approach instruments an extremely lightweight snippet of Qualys code into the container image, enabling policy-driven monitoring, detection and blocking of container behavior at runtime. This eliminates the need for sidecar and privileged containers, which are difficult to manage and administer on host nodes and don’t work in container-as-a-service environments.
Aqua Security announced a suite of new Kubernetes-native security capabilities, providing a holistic approach to securing applications that run on Kubernetes, across the development, deployment, and runtime phases of the application lifecycle.
The company also announced significant new features in its Cloud Security Posture Management (CSPM) solution. These new capabilities, which will be generally available next week, are integrated into Aqua’s cloud native security platform, covering the spectrum of deployment options across containers, VMs and serverless functions.
In a recent research note, Gartner asserts that “Kubernetes’ inherent complexity often leads to outdated versions and misconfiguration by organizations, making clusters susceptible to compromise. Though some security mechanisms are included by design, K8s by itself is not a security offering, and security settings aren’t always enabled by default.
“Protecting a K8s cluster is a significant undertaking, requiring both substantial understanding of the underlying technology and engineering expertise to configure it all.”
Aqua’s new Kubernetes security solution addresses the complexity and short supply of engineering expertise required to configure Kubernetes infrastructure effectively and automatically, by introducing KSPM – Kubernetes Security Posture Management – a coherent set of policies and controls to automate secure configuration and compliance.
Additionally, Aqua now offers new agentless runtime protection capabilities that use Kubernetes itself to deploy security controls into pods, leveraging and extending the native capabilities built into Kubernetes.
“The large-scale use of Kubernetes, as well as developments in the threat landscape, necessitate a comprehensive approach to securing applications that goes beyond generic benchmarks, providing seamless workload protection in runtime,” noted Amir Jerbi, CTO and co-founder at Aqua.
“We’ve been working with our enterprise customers to make it easier to securely deploy and seamlessly protect applications that run on Kubernetes, while complementing our existing capabilities in Kubernetes and container security.”
Aqua KSPM includes several new and innovative capabilities:
- Kubernetes assurance policies: With more than 20 predefined rules available out of the box, and the ability to use OPA (Open Policy Agent) Rego rules, these policies define which Pods may be deployed in a cluster based on multiple parameters. These policies work in conjunction with Aqua’s Image Assurance Policies to control which containers run in your cluster based on both their image contents and configuration, as well as Pod configuration.
- Kubernetes roles and subjects assessment: Reduces administration overhead of maintaining Kubernetes user and service account privileges by identifying risks and suggesting their remediation. This addresses least privilege security gaps while diminishing the need for Kubernetes security expertise, which is in short supply.
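The kind of configuration rule such assurance policies enforce can be sketched as a simple check over a Pod manifest. The two rules below (no privileged containers, mandatory resource limits) are illustrative examples only, not Aqua's actual KSPM rule set:

```python
# Illustrative pod-assurance check. The pod structure mirrors the
# Kubernetes Pod spec, but the policy rules themselves are a
# simplified sketch, not Aqua's actual rules.

def check_pod(pod):
    """Return a list of policy violations for a pod manifest (dict)."""
    violations = []
    for container in pod.get("spec", {}).get("containers", []):
        name = container.get("name", "<unnamed>")
        security = container.get("securityContext", {})
        if security.get("privileged"):
            violations.append(f"{name}: privileged containers are not allowed")
        if "limits" not in container.get("resources", {}):
            violations.append(f"{name}: resource limits must be set")
    return violations

pod = {
    "spec": {
        "containers": [
            {"name": "app", "securityContext": {"privileged": True}, "resources": {}},
            {"name": "sidecar", "resources": {"limits": {"memory": "128Mi"}}},
        ]
    }
}
# check_pod(pod) flags two violations for "app" and none for "sidecar"
```

In production such rules are typically written in OPA's Rego language, as the bullet above notes; the Python version just makes the evaluation logic explicit.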
These new capabilities join Aqua’s existing certified CIS benchmark testing (powered by Aqua’s open source Kube-Bench), and penetration testing (powered by Aqua’s open source Kube-Hunter), providing enterprises with comprehensive insight into the security posture of their Kubernetes cluster, and the ability to address gaps efficiently with no need for specialized expertise.
With its new Kubernetes Runtime Protection module, Aqua introduces a new model for deploying security runtime controls in a Kubernetes cluster, complementing its existing container runtime security deployment options.
This new model leverages Kubernetes Admission Controllers to deploy and govern sidecar containers within Pods, in a similar fashion to other cloud native tools such as Envoy.
This mode of deployment enables greater automation of deployment and does not require any privileges on the node’s host OS, while providing dynamic runtime controls such as container drift prevention, behavioral controls, and network controls.
In addition to the extensions to Kubernetes security capabilities, this latest release adds many new features and enhancements including:
- New customizable dashboard: Provides a clear view of the overall security status of your cloud native environment with dedicated widgets for key areas, such as host and image/container security, and drag & drop design. The new dashboard supports Aqua’s RBAC model to filter viewable data according to user role permissions.
- AWS Bottlerocket support: The new AWS operating system for running containers is now available as a protected workload platform.
- Auto-remediation for Azure in Aqua CSPM: Aqua CSPM now provides remediation advice and auto-remediation options for Azure cloud services, previously available for AWS.
- New compliance reports in Aqua CSPM: Aqua CSPM now provides out-of-the-box compliance reports for additional compliance reporting, including SOC 2 Type 2, ISO27001, NIST SP 800-53, and NIST CSF.
- VM security: Now allows flexible scan scheduling, scan history review, and malware scans on mounted NFS shares.
COVID-19 has upended the way we do all things. In this interview, Mike Bursell, Chief Security Architect at Red Hat, shares his view of which IT security changes are ongoing and which changes enterprises should prepare for in the coming months and years.
How has the pandemic affected enterprise edge computing strategies? Has the massive shift to remote work created problems when it comes to scaling hybrid cloud environments?
The pandemic has caused major shifts in the ways we live and work, from video calls to increased use of streaming services, forcing businesses to embrace new ways to be flexible, scalable, efficient and cost-saving. It has also exposed weaknesses in the network architectures that underpin many companies, as they struggle to cope with remote working and increased traffic. We’re therefore seeing both an accelerated shift to edge computing, which takes place at or near the physical location of either the end-user or the data source, and further interest in hybrid cloud strategies which don’t require as much on-site staff time.
Changing your processes to make the most of this without damaging your security posture requires thought and, frankly, new policies and procedures. Get your legal and risk teams involved – but don’t forget your HR department. HR has a definite role to play in allowing your key employees to continue to do the job you need them to do, but in ways that are consonant with the new world we’re living in.
However, don’t assume that these will be – or should be! – short-term changes. If you can find more efficient or effective ways of managing your infrastructure, without compromising your risk profile while also satisfying new staff expectations, then everyone wins.
What would you say are the most significant challenges for enterprises that want to build secure and future-proof application infrastructures?
One challenge is that although some of the technology is now quite mature, the processes for managing it aren’t, yet. And by that I don’t just mean technical processes, but how you arrange your teams and culture to suit new ways of managing, deploying, and (critically) automating your infrastructure. Add to this new technologies such as confidential computing (using Trusted Execution Environments to protect data in use), and there is still a lot of change.
The best advice is to plan for change – technical, process and culture – but do not, whatever you do, leave security till last. It has to be front and centre of any plans you make. One concrete change that you can make immediately is taking your security people off just “fire-fighting duty”, where they have to react to crises as they come in: businesses can consider how to use them in a more proactive way.
People don’t scale, and there’s a global shortage of security experts. So, you need to use the ones that you have as effectively as you can, and, crucially, give them interesting work to do, if you plan to retain them. It’s almost guaranteed that there are ways to extend their security expertise into processes and automation which will benefit your broader teams. At the same time, you can allow those experts to start preparing for new issues that will arise, and investigating new technologies and methodologies which they can then reapply to business processes as they mature.
How has cloud-native management evolved in the last few years and what are the current security stumbling blocks?
One of the areas of both maturity and immaturity is in terms of workload isolation. We can think of three types: workload from workload isolation (preventing workloads from interfering with each other – type 1); host from workload isolation (preventing workloads from interfering with the host – type 2); workload from host isolation (preventing hosts from interfering with workloads – type 3).
The technologies for types 1 and 2 are really quite mature now, with containers and virtual machines combining a variety of hardware and software techniques such as virtualization, cgroups and SELinux. On the other hand, protecting workloads from malicious or compromised hosts is much more difficult, meaning that regulators – and sensible enterprises! – are unwilling to have some workloads execute in the public cloud.
Technologies like secure and measured boot, combined with TPM capabilities by projects such as Keylime (which is fully open source) are beginning to address this, and we can expect major improvement as confidential computing (and open source projects like Enarx which uses TEEs) matures.
In the past few years, we’ve seen a huge interest in Kubernetes deployments. What common mistakes are organizations making along the way? How can they be addressed?
One of the main mistakes we see businesses make is attempting to deploy Kubernetes without the appropriate level of in-house expertise. Kubernetes is an ecosystem rather than a one-off executable, and it relies on other services provided by open source projects. It requires IT teams to fully understand an architecture made up of applications and network layers.
Once implemented, businesses must also maintain the ecosystem in parallel to any software running on top. When it comes to implementation, businesses are advised to follow open standards – those decided upon by the open source Kubernetes community as a whole, rather than a specific vendor. This will prevent teams from running into unexpected roadblocks, and helps to ensure a smooth learning curve for new team members.
Another mistake organizations can make is ignoring small but important details, such as Kubernetes’ backward compatibility with older versions. It’s easy to overlook the fact that older versions may lack important security updates, so IT teams must be mindful when merging code across versions and check regularly for available updates.
Open source remains one of the building blocks of enterprise IT. What’s your take on the future of open source code in large business networks?
Open source is here to stay, and that’s a good thing, not least for security. The more security experts there are to look at code, the more likely that bugs will be found and fixed. Of course, security experts are short on the ground, and busy, so it’s important that large enterprises make a commitment to getting involved with open source and committing resources to it.
Another issue is that people get confused into thinking that just because a project is open source, it’s ready to use. There’s a difference between an open source project and an enterprise product which is based on that project. In the latter case, you get all the benefits of testing, patching, upgrading, vulnerability processes, version management and support. In the former case, you need to manage everything yourself – including ensuring that you have sufficient expertise in house to cope with any issues that come up.
At the same time, DevOps automation continues to expand in scope and complexity, with more and more processes becoming automated and more involved technologies like Kubernetes continuing to gain strong traction. While automation has improved somewhat year over year, most organizations are still struggling to implement and maintain it.
COVID-19 has led many to reconsider their on-prem infrastructure strategy
58% of respondents said that due to the pandemic they are planning to move some infrastructure to the cloud, with 17% planning to move their entire stack.
In total, about 75% of respondents said that they are moving at least part of their infrastructure to the cloud as a result of the COVID-19 pandemic, representing a dramatic shift in strategy and further adoption towards the cloud.
DevOps budgets are going up in 2020
74% of respondents are expecting an increase and more than half are expecting their budgets to increase by 25% or more.
Organizations are continuing to invest heavily in their DevOps budgets as the effect of DevOps on developer velocity and site reliability continues to be better understood.
Most companies still struggling with commit-to-production automation
If your organization is struggling with complete commit-to-production automation, you’re not alone. Automation proves to be elusive as less than 5% of respondents claimed that all of their company’s DevOps processes are automated from Git commit to code running in production.
52% of respondents have less than 50% of their organization’s DevOps process automated from Git commit to production. This is down from last year’s survey, where 66% of respondents had less than 50% of their processes automated. This represents the continued trend away from manual processes as organizations build out their DevOps automation.
Kubernetes continues to build momentum
Kubernetes continues to build momentum, with most thinking that it will be used on more than half of new projects by the end of 2020. 73% of respondents said that they believe that by the end of 2020, more than half of new projects will use Kubernetes.
In 2019’s survey, 54% of respondents said that Kubernetes would be used in more than half of all projects by the end of the year. Clearly, Kubernetes adoption is continuing to accelerate. 75% of respondents said that they have either already adopted Kubernetes or are planning to adopt Kubernetes soon.
67% of DevOps engineers spend over a quarter of their time just fixing bugs
67% of respondents said that they spend 25% or more of their time fixing bugs in their automated systems, while 35% of respondents spend 50% or more of their time fixing bugs in their automated systems.
This highlights the importance of choosing a well-architected DevOps automation stack, as the platform you use can have a massive impact on the amount of time lost to bug fixing.
A malicious cryptocurrency miner and DDoS worm that has been targeting Docker systems for months now also steals Amazon Web Services (AWS) credentials.
The original threat
TeamTNT’s “calling card” appears when the worm first runs on the target installation. The worm then proceeds to:
- Scan for open Docker daemon ports (i.e., misconfigured Docker containers)
- Create an Alpine Linux container to host the coinminer and DDoS bot
- Search for and delete other coin miners and malware
- Configure the firewall to allow ports that will be used by the other components, sinkhole other domain names, exfiltrate sensitive information from the host machine
- Download additional utilities, a log cleaner, and a tool that attackers may use to pivot to other devices in the network (via SSH)
- Download and install the coinminer
- Collect system information and send it to the C&C server
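A defensive counterpart to the first step above is easy to script: check whether a host exposes the Docker daemon's conventional unauthenticated TCP port (2375), the misconfiguration this worm scans for. This is a minimal sketch; the host and timeout values are illustrative:

```python
# Check whether a host exposes the unauthenticated Docker daemon
# TCP port (2375 by convention) that worms like TeamTNT's scan for.
import socket

def docker_api_exposed(host, port=2375, timeout=2.0):
    """Return True if a plain TCP connection to the Docker API port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run against your own hosts, a `True` result means the daemon's remote API is reachable without TLS client authentication and should be locked down immediately.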
The latest iteration has been equipped with new capabilities, Cado Security researchers found.
The worm still scans for open Docker APIs, then spins up Docker images and installs itself in a new container, but it now also searches for exploitable Kubernetes systems and files containing AWS credentials and configuration details – just in case the compromised systems run on the AWS infrastructure.
The code to steal these files is relatively straightforward, the researchers note, and they expect other worms to copy this new ability soon.
But are the attackers using the stolen credentials or are they selling them? The researchers tried to find out by sending “canary” AWS keys to TeamTNT’s servers, but they haven’t been used yet.
“This indicates that TeamTNT either manually assess and use the credentials, or any automation they may have created isn’t currently functioning,” they concluded.
Nevertheless, they urge businesses to:
- Identify systems that are storing AWS credential files and delete them if they aren’t needed
- Use firewall rules to limit any access to Docker APIs
- Review network traffic for connections to mining pools or using the Stratum mining protocol
- Review any connections sending the AWS Credentials file over HTTP
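The first recommendation can be automated with a short inventory script. The sketch below checks the default AWS CLI credential locations; extend the path list for your own environment:

```python
# Locate AWS credential files of the kind TeamTNT's worm steals.
# The default locations mirror the AWS CLI's conventions
# (~/.aws/credentials and ~/.aws/config).
from pathlib import Path

DEFAULT_LOCATIONS = [
    Path.home() / ".aws" / "credentials",
    Path.home() / ".aws" / "config",
]

def find_credential_files(paths=DEFAULT_LOCATIONS):
    """Return the subset of candidate paths that exist on this host."""
    return [p for p in paths if p.is_file()]
```

Any hits on systems that don't need local AWS credentials are candidates for deletion, or for migration to short-lived credentials issued via IAM roles.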
The HPE Container Platform is the industry’s first enterprise-grade container platform designed to support both cloud-native and non-cloud-native applications using 100 percent open source Kubernetes – running on bare-metal or virtual machines (VMs), in the data center, on any public cloud, or at the edge.
In addition, HPE is introducing new professional services to ensure faster time-to-value and several new reference configurations for data-intensive application workloads such as AI, machine learning, deep learning (DL), data analytics, edge computing, and Internet of Things (IoT).
Reducing cost and complexity
Many organizations started their container journey with stateless workloads that are easier to transition to a cloud-native microservices architecture. However, the majority of business applications today are monolithic, stateful, and non-cloud-native workloads that live throughout the enterprise. Organizations seek to modernize and containerize these applications without significant refactoring – while ensuring production-grade security and persistent data storage.
While some early on-premises Kubernetes deployments ran containers inside VMs, this approach is no longer necessary. Running containers on bare-metal provides significant advantages to organizations seeking to modernize and run containers at scale in the enterprise. These include: reducing unnecessary overhead, avoiding lock-in with a proprietary virtualization format, and eliminating “vTax” licensing costs.
The HPE Container Platform reduces cost and complexity by running containers on bare-metal, while providing the flexibility to deploy in VMs or cloud instances. This allows businesses to embrace a hybrid cloud or multi-cloud approach to deploying Kubernetes with enterprise-class security, performance, and reliability. Organizations seeking greater cost savings, efficiency, utilization, and application performance can eliminate the need for virtualization and expensive hypervisor licenses, by running containers directly on bare-metal infrastructure.
HPE Container Platform advantages
Additional advantages of the HPE Container Platform and bare-metal containers include:
- Speed. Deploying and running containerized applications on bare-metal is faster. There’s no need to start up the guest operating system (OS) of the VM, including a full boot process; this speeds development, operations, and time-to-market.
- Reduction in cost and resources. Since each VM has its own guest OS, eliminating it reduces the RAM, storage and CPU resources—and the associated data center costs—required to sustain it.
- Elimination of an orchestration layer. There’s no need to have a management framework for a virtualized environment and a Kubernetes orchestration environment for containers.
- Increased density per hardware platform. Run more containers on a given physical host than VMs, by eliminating multiple copies of guest OSes and their requirements for CPU, memory, and storage.
- Better performance for applications that require direct access to hardware. Analytics and artificial intelligence (AI) workloads with machine learning (ML) algorithms require heavy computation to train the ML models; these applications will deliver faster results and higher throughput on bare-metal.
Built on proven innovations from HPE’s recent acquisitions of BlueData and MapR, the HPE Container Platform is an integrated turnkey solution with BlueData software as the container management control plane and the MapR distributed file system as the unified data fabric for persistent storage.
“With the HPE Container Platform, GM Financial has deployed containerized applications for machine learning and data analytics running in production in a multi-tenant hybrid cloud architecture, for multiple use cases from credit risk analysis to improving customer experience,” said Lynn Calvo, AVP of Emerging Data Technology at GM Financial.
“The next phase of enterprise container adoption requires breakthrough innovation and a new approach,” said Kumar Sreekanti, senior vice president and chief technology officer of Hybrid IT at HPE. “Our HPE Container Platform software brings agility and speed to accelerate application development with Kubernetes at scale. Customers benefit from greater cost efficiency by running containers on bare-metal, with the flexibility to run on VMs or in a cloud environment.”
“We’re leveraging the innovations of the open source Kubernetes community, together with our own software innovations for multi-tenancy, security, and persistent data storage with containers,” continued Sreekanti. “The new HPE Container Platform is designed to help customers as they expand their containerization deployments, for multiple large-scale Kubernetes clusters with use cases ranging from machine learning to CI / CD pipelines.”
Commitment to open source
HPE is actively engaged in the Cloud Native Computing Foundation and Kubernetes community, with open source projects such as KubeDirector. A key component of the HPE Container Platform, KubeDirector provides the ability to run non-cloud-native monolithic applications (i.e., stateful applications with persistent storage) on Kubernetes. HPE’s recent acquisition of Scytale for cloud-native security underscores its commitment to the open source ecosystem, with ongoing contributions to open source projects, including Secure Production Identity Framework for Everyone (SPIFFE) and SPIFFE Runtime Environment (SPIRE).
When Jordan Liggitt at Google posted details of a serious Kubernetes vulnerability in November 2018, it was a wake-up call for security teams ignoring the risks that came with adopting a cloud-native infrastructure without putting security at the heart of the whole endeavor.
For such a significant milestone in Kubernetes history, the vulnerability didn’t have a suitably alarming name comparable to the likes of Spectre, Heartbleed or the Linux Kernel’s recent SACK Panic; it was simply a CVE post on the Kubernetes GitHub repo. But CVE-2018-1002105 was a privilege escalation vulnerability that enabled a normal user to steal data from any container in a cluster. It even enabled an unauthorized user to create an unapproved service on Kubernetes, run the service in a default configuration, and inject malicious code into that service.
The first approach took advantage of pod exec/attach/portforward privileges to make a user a cluster-admin. The second method was possible as a bad actor could use the Kubernetes API server – essentially the front-end of Kubernetes through which all other components interact – to establish a connection to a back-end server and use the same connection. Crucially, this meant that the attacker could use the connection’s established TLS credentials to create their own service instances.
This was perfect privilege escalation in action, as any requests were made through an established and trusted connection and therefore didn’t appear in either the Kubernetes API server audit logs or server log. While they were theoretically visible in kubelet or aggregated API server logs, they wouldn’t appear any different to an authorized request, blending in seamlessly with the constant stream of requests.
Of course, open source versions of Kubernetes were patched quickly for this vulnerability and cloud service providers sprang into action to patch their managed services, but this was the first time that Kubernetes had experienced a critical vulnerability. It was also, as Jordan Liggitt stated in his CVE post at the time, notable for there being no way to detect whether, or how often, the vulnerability had been exploited.
Unfortunately, this CVE also highlighted how unprepared many traditional enterprise IT organizations were when it came to their containerized applications. Remediation required an immediate update to Kubernetes clusters, but Kubernetes isn’t backward-compatible with every previous release. This meant some organizations faced two issues: not only did they have to provision new Kubernetes clusters, but they also found their applications no longer worked.
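The upgrade crunch described above comes down to version skew: kubelets that lag too far behind a patched API server fall out of the supported window. As a hedged sketch (the exact skew window has varied by release, and this helper name is invented for illustration), a pre-upgrade check might look like:

```python
# Hedged sketch: Kubernetes supports only a limited version skew between
# the API server and kubelets (historically, a kubelet could be up to two
# minor versions older than kube-apiserver, and never newer). A quick
# pre-upgrade check can flag clusters that a forced patch would break.
def upgrade_safe(apiserver_minor: int, kubelet_minor: int,
                 max_skew: int = 2) -> bool:
    """Return True if the kubelet's minor version is within the skew window."""
    return 0 <= apiserver_minor - kubelet_minor <= max_skew

print(upgrade_safe(13, 11))  # True: within the supported skew
print(upgrade_safe(13, 10))  # False: kubelet too far behind
print(upgrade_safe(12, 13))  # False: kubelet newer than the API server
```

Teams that ran this kind of check ahead of time could patch in place; those that couldn’t were forced into cluster rebuilds.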
The rise of containers for apps, with their clever use of namespaces and cgroups, which respectively limit what system resources a process can see and use, has ushered in an era of hyper-scale and flexibility for enterprises.
According to Sumo Logic’s Continuous Intelligence Report, which is derived from 2,000 companies, the use of Docker containers in production among enterprises has grown from 18 per cent in 2016 to almost 30 per cent in 2019. Docker owes much of its success to Kubernetes. The platform, built from Google’s Borg project and open-sourced for all to use, has abstracted away much of the management complexity of handling thousands of containers. However, it has created security challenges.
Since this high-profile vulnerability, other Kubernetes flaws have been found, each exposing undiscovered gaps in how companies apply security to their container-based applications. First came the runc container exploit in February, which allowed a malicious container to overwrite the runc binary and gain root on the container host. This was followed by an irritating, though limited by authorization, Denial of Service (DoS) vulnerability that exploited patch requests.
The most recent vulnerability, uncovered by StackRox, was another DoS attack, this time hitting the Kubernetes API server. It exploited the parsing of YAML manifests sent to kube-apiserver. Because kube-apiserver performed no input validation on manifests and applied no manifest file size limit, the server was susceptible to the unfunny Billion Laughs DoS attack.
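The Billion Laughs technique relies on exponential alias expansion: a few short lines of nested references blow up into gigabytes when parsed. A minimal arithmetic sketch of that blow-up (no YAML parser involved, just the growth a naive parser would experience):

```python
# Model the exponential blow-up behind a Billion Laughs payload: each
# level of the document defines an anchor that references the previous
# level several times, so L short lines expand to refs**L leaf values.
def expanded_leaf_count(levels: int, refs_per_level: int) -> int:
    count = 1
    for _ in range(levels):
        count *= refs_per_level
    return count

# A nine-level payload with ten references per level expands to a
# billion leaves, even though the manifest is only a few hundred bytes.
print(expanded_leaf_count(9, 10))  # 1000000000
```

This is why a parser that enforces no size or depth limits can be driven out of memory by a trivially small request.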
Container security requires continuous security
Among the lessons to be learned from the growing number of issues discovered over time in Kubernetes is that there will be more, and they will be discoverable across the different stages of the software development lifecycle (SDLC). In other words, Kubernetes is just like any other new, critical infrastructure component introduced in an application development environment.
Discovering and addressing this new class of vulnerabilities will require continuous security monitoring across development, test and production environments. It will also require collaboration and integrated workflow between previously siloed teams, from initial planning and coding all the way through testing and into production. Many use the term DevSecOps to describe this evolution of the DevOps transition that often accompanies modern application development with containers and orchestration.
Choosing a common analytics platform for your DevSecOps projects can result in substantial operational savings while also providing the fabric to deal with the unique security challenges of containers. For example, integrated insight across the tool chain and technology stacks can be leveraged to pinpoint infected nodes, run compliance checks to pick up anonymous access to the API, and apply run-time defenses for containers. In many cases, container security tooling can automatically detect and stop unusual binaries and behavior, for instance attempts to access the API from an application inside a compromised container.
To build, run and secure containerized apps in this DevSecOps model requires a new approach to the core visibility, detection, and investigation workflows that make up the defense. DevSecOps requires tools that supply deep visibility into your systems and can identify, investigate and prioritize security and compliance threats across the SDLC. This level of observability comes from integrated, large-scale real-time analytics aggregated from both structured and unstructured data across all the systems in the complex SDLC tool chain.
While straightforward as a strategy, the execution of this approach is often frustrated by fragmented analysis tools across logs, metrics, tracing, application performance, code analysis, integrated testing, runtime testing, CI/CD and more. This often leaves teams managing several products to connect the dots between, for example, low-level Kubernetes issues and their potential impact on security at the application layer. Traditional analytics tools often lack the basic scale and ingestion capacity to integrate the data, but, equally important, they also lack the native understanding of these modern data sources required to generate insight from the data without excessive programming or human analysis.
Even when adopting a smaller set of application development and testing platforms, with the scale and insight required, DevSecOps needs capabilities specifically designed for the container/orchestration problem space. First, from a discoverability standpoint the platform must provide multiple views on the data to provide situational awareness. For example, providing visual representations of both the low-level infrastructure as well as the higher-level service view helps connect both the macro and micro security picture. Also, from an observability standpoint, the system must integrate with the wide array of tools that facilitate various aspects of collection and detection (such as Prometheus, Fluentd and Falco).
Metadata in Kubernetes, in the form of labels and annotations, is used for organizing and understanding the way containers are orchestrated, so leveraging this to gain security insight with automated detection and tagging is an important capability. Finally, the system needs to assimilate the insight and data from the various discrete container security systems to provide a comprehensive view.
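As a minimal sketch of that idea (the pod records and the `scanned` label are hypothetical, not a real client API), label metadata can drive automated tagging of workloads that need attention:

```python
# A minimal sketch, assuming hypothetical pod metadata: scan pod labels
# for workloads that lack an image-scan marker and tag them for review.
pods = [
    {"name": "web-7f9c", "labels": {"app": "web", "scanned": "true"}},
    {"name": "etl-42xk", "labels": {"app": "etl"}},
]

def flag_unscanned(pods):
    """Return names of pods whose labels lack scanned=true."""
    return [p["name"] for p in pods
            if p.get("labels", {}).get("scanned") != "true"]

print(flag_unscanned(pods))  # ['etl-42xk']
```

In a real deployment the same filtering would run against metadata pulled from the Kubernetes API, feeding automated detection rather than a manual review list.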
All of these dimensions of integration (data, analytics, workflow) demand continuous security intelligence applied across the SDLC. Securing containers and orchestration, and more broadly the entire modern application stack, cannot afford the planning and production delays that come from connecting dozens of fragmented analytics tools.
At a higher level, securing the modern application stack also can’t depend on slow integration of data, analysis, and conclusions across the functional owners of these many tools (security, IT operations, application teams, DevOps and so on). Continuous intelligence from an integrated analytics platform can break down these silos and can be a critical element of securing containerized applications in a DevSecOps model.
Portshift has introduced five security best practices for DevOps and development professionals managing Kubernetes deployments.
Integrating these security measures into the early stages of the CI/CD pipeline will help organizations detect security issues earlier, allowing security teams to remediate them quickly.
Kubernetes as the market leader
The use of containers continues to rise in popularity in test and production environments, increasing demand for a means to manage and orchestrate them. Of all the orchestration tools, Kubernetes (K8s) has emerged as the market leader in cloud-native environments.
Unfortunately, Kubernetes is not as adept at security as it is at orchestration. It is therefore essential to use the right deployment architecture and security best practices for all deployments.
Kubernetes security challenges
However, while Kubernetes has risen in popularity, it has also come with its own set of security issues, increasing the risk of attacks on applications.
Because Kubernetes deployments consist of many different components (including the Kubernetes master and nodes, the server that hosts Kubernetes, the container runtime used by Kubernetes, the networking layers within the cluster, and the applications that run inside containers hosted on Kubernetes), securing Kubernetes requires DevOps/developers to address the security challenges associated with each of these components.
Five security best practices
- Authorization: Kubernetes offers several authorization methods, which are not mutuallyexclusive. It is recommended to use RBAC and ABAC in combination, with RBAC policies enforced first and ABAC policies complementing them with finer-grained filtering.
- Pod security: Since each pod contains a set of one or more containers, it is essential to control their communication. This is done using Pod Security Policies, which are cluster-level resources that control security-sensitive aspects of the pod specification.
- Container security: Kubernetes includes basic workload security primitives related to container security. However, if apps, or the environment, are not configured correctly, the containers become vulnerable to attacks.
- Migration to production: As companies move more deployments into production, that migration increases the volume of vulnerable workloads at runtime. This issue can be overcome by applying the solutions described above, as well as making sure that your organization maintains a healthy DevOps/DevSecOps culture.
- Securing CI/CD pipelines on Kubernetes: Running CI/CD on Kubernetes allows for the build-out, testing, and deployment of K8s environments that can quickly be scaled as needed. Security must be baked into the CI/CD process; otherwise attackers can gain access at a later point and infect your code or environment. Leverage a security solution that acts as a protection layer for K8s and provides visibility at both the app and cluster levels.
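The layered authorization described in the first practice can be pictured as a toy model (the policy shapes here are illustrative, not the actual Kubernetes RBAC or ABAC file formats): RBAC answers whether the user may perform the verb at all, and ABAC then applies finer, attribute-based filtering:

```python
# Toy model of layered authorization: RBAC is consulted first, then ABAC
# narrows the result with attribute-based rules. Policy shapes here are
# illustrative only, not the real Kubernetes policy formats.
rbac_bindings = {("alice", "pods"): {"get", "create"}}

def abac_allows(user, verb, resource, attrs):
    # Example attribute rule: writes are only allowed in the "dev" namespace.
    if verb in {"create", "delete"} and attrs.get("namespace") != "dev":
        return False
    return True

def authorize(user, verb, resource, attrs):
    if verb not in rbac_bindings.get((user, resource), set()):
        return False  # RBAC denies outright; ABAC is never consulted
    return abac_allows(user, verb, resource, attrs)

print(authorize("alice", "get", "pods", {"namespace": "prod"}))     # True
print(authorize("alice", "create", "pods", {"namespace": "prod"}))  # False
print(authorize("alice", "create", "pods", {"namespace": "dev"}))   # True
```

The coarse role check gates access cheaply, while the attribute rules capture context (namespace, labels, request details) that roles alone can’t express.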
“As the leading orchestration platform, Kubernetes is in active use at AWS, Google Cloud Platform, and Azure,” said Zohar Kaufman, VP, R&D, Portshift. “With the right security infrastructure in place, it is set to change the way applications are deployed in the cloud with unprecedented efficiency and agility.”
The Cloud Native Computing Foundation is inviting bug hunters to search for and report vulnerabilities affecting Kubernetes. Offered bug bounties range from $100 to $10,000.
What is Kubernetes?
Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management.
It was originally designed by Google, then open sourced and handed over to the Cloud Native Computing Foundation for ongoing maintenance, and is now a community project.
The Kubernetes bug bounty program
The program will be managed by HackerOne and reports will be investigated by a set of community volunteers.
Initially open just to invited researchers, the bug bounty program has now been opened to all who want to try their hand at discovering vulnerabilities in the 82 assets in scope, which span core Kubernetes and add-ons, Kubernetes-owned core dependencies, non-core components, and the Kubernetes infrastructure, including the main website and the Kubernetes build and test infrastructure.
A more granular list can be perused here.
“The bug bounty scope covers code from the main Kubernetes organizations on GitHub, as well as continuous integration, release, and documentation artifacts. Basically, most content you’d think of as ‘core’ Kubernetes, included at https://github.com/kubernetes, is in scope,” Google’s Maya Kaczorowski and Tim Allclair explained.
“We’re particularly interested in cluster attacks, such as privilege escalations, authentication bugs, and remote code execution in the kubelet or API server. Any information leak about a workload, or unexpected permission changes is also of interest. Stepping back from the cluster admin’s view of the world, you’re also encouraged to look at the Kubernetes supply chain, including the build and release processes, which would allow any unauthorized access to commits, or the ability to publish unauthorized artifacts.”
Kaczorowski also pointed out that this program is a bit different from standard bug bounties, as there isn’t a ‘live’ environment for bug hunters to test.
“Kubernetes can be configured in many different ways, and we’re looking for bugs that affect any of those (except when existing configuration options could mitigate the bug),” she added.