Red Hat announced Red Hat Enterprise Linux 8.3, the latest version of its enterprise Linux platform. Generally available in the coming weeks, Red Hat Enterprise Linux 8.3 fuses the stability required by IT operations teams with cloud-native innovation, providing a more stable platform for next-generation enterprise applications.
Already an established backbone for mission-critical computing, the platform gains new performance profiles and automation, reinforced security capabilities and updated container tools in this release.
According to Red Hat’s Enterprise Open Source Report, 63% of organizations surveyed have a hybrid cloud architecture today, while more than half of those who do not have one plan to implement one within the next two years.
Linux is often a linchpin in hybrid cloud deployments, providing a common operating environment that spans from bare-metal servers to public cloud deployments.
As the world’s leading enterprise Linux platform, Red Hat Enterprise Linux 8.3 is designed to deliver an enterprise-ready, standardized platform for this next wave of computing, helping enterprises transform digitally while retaining existing datacenter investments.
Optimized innovation, made more manageable
As hybrid cloud computing grows, the ability to manage and optimize the underlying Linux platforms at scale becomes critical. Additionally, IT organizations need to be able to lower the barrier of entry for Linux, enabling systems administrators or IT managers who may be unfamiliar with the operating system to still effectively oversee deployments.
To support these needs, Red Hat Enterprise Linux 8.3 further expands Red Hat System Roles, which provide prescriptive, automated workflows for operating system-specific configurations. Newly supported roles cover kernel settings, log settings, SAP HANA, SAP NetWeaver and management.
System Roles help make common and complex Red Hat Enterprise Linux configurations more consistent, repeatable and accessible to a wider range of skill sets, even across very large IT estates.
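As a sketch of how a System Role is consumed, the snippet below writes a minimal Ansible playbook that applies the kernel settings role. It assumes Ansible and the rhel-system-roles package are installed on the control node; the "webservers" inventory group and the sysctl tunable shown are hypothetical examples, not values from the announcement.

```shell
# Write a minimal playbook applying the kernel_settings System Role.
# "webservers" is a hypothetical inventory group; the sysctl name and
# value are illustrative only.
cat > kernel_settings.yml <<'EOF'
- hosts: webservers
  roles:
    - rhel-system-roles.kernel_settings
  vars:
    kernel_settings_sysctl:
      - name: fs.file-max
        value: 400000
EOF

# On a control node with Ansible and rhel-system-roles installed:
#   ansible-playbook -i inventory kernel_settings.yml
```

Because the role, not the administrator, decides how the setting is applied on each target system, the same playbook can be reused across a large estate without per-host drift.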
Red Hat Enterprise Linux 8.3 refines the platform’s performance with updates to Tuned, a set of pre-configured, architecture-aware performance profiles. Tuned enables IT teams to lean on Red Hat’s extensive multiarchitecture expertise in maximizing performance across hardware architectures.
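For illustration, applying one of the pre-configured profiles is a one-line operation. The commands below are shown as a transcript because they require the tuned daemon to be installed and running on a RHEL system; the profile name is one of the standard shipped profiles.

```shell
# List the profiles shipped with tuned, apply one, then confirm it.
# Shown as a transcript; requires the tuned daemon to be running.
#   tuned-adm list
#   tuned-adm profile throughput-performance
#   tuned-adm active
```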
Additionally, Red Hat Insights remains available by default to supported Red Hat Enterprise Linux systems. Red Hat Insights is designed to deliver Red Hat’s decades of Linux know-how as a proactive monitoring and remediation service and now includes administrator views specifically for SAP HANA deployments.
Extending Linux’s secure footprint
Delivering a more secure platform remains front-and-center in Red Hat Enterprise Linux 8.3, which adds new Security Content Automation Protocol (SCAP) profiles for the Center for Internet Security (CIS) Benchmark and the Health Insurance Portability and Accountability Act (HIPAA). This helps IT organizations configure systems more efficiently to meet a broader range of security best practices and industry and governmental standards.
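As a hedged sketch, scanning a system against the CIS profile with OpenSCAP might look like the following. It is shown as a transcript because it must run on the target RHEL 8 system; the exact profile ID and data-stream path can vary by minor release, so treat both as illustrative.

```shell
# Evaluate the system against the CIS Benchmark profile and emit an
# HTML report. The profile ID and data-stream path are illustrative
# and may differ between RHEL 8 minor releases.
#   oscap xccdf eval \
#     --profile cis \
#     --report /tmp/cis-report.html \
#     /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
```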
System Roles for security-centric tasks have also been expanded to include identity management configuration, certificate management and Network-Bound Disk Encryption (NBDE).
These system roles provide a way for administrators to more rapidly and efficiently extend the security of their systems while reducing the risk of human error when implementing these configurations.
A foundation for cloud-native innovation
Red Hat Enterprise Linux 8.3 builds on the initial innovations of Red Hat Enterprise Linux 8 with updates to Application Streams, where developer frameworks, databases, container tools and other resources are separated from the core foundational components of the operating system.
Newly supported developer tools include Node.js 14, Ruby 2.7 and many others, with earlier versions still accessible to maintain production consistency.
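Application Streams are consumed as yum modules, so switching a system to a newer stream is a small, explicit operation. A sketch for the Node.js 14 stream, shown as a transcript since it requires a registered RHEL 8.3 system (the reset step is only needed if an earlier stream was previously enabled):

```shell
# Inspect available Node.js streams, then enable and install the new
# one. Requires a registered RHEL 8.3 system.
#   yum module list nodejs
#   yum module reset -y nodejs      # only if an earlier stream was enabled
#   yum module enable -y nodejs:14
#   yum module install -y nodejs:14
```

Because earlier streams remain available, teams that need production consistency can stay pinned to an older version while others adopt the new one.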
For building cloud-native applications, the platform adds updated container images for Buildah and Skopeo, intended to help reduce the friction between developer and operations teams in container deployments by making it easier to build and consume containerized applications wherever needed.
Podman 2.0 is also included, bringing a new REST API that enables users to retain container code and tooling that previously relied on the Docker Container Engine with greater programmatic control.
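A minimal sketch of talking to the new REST API, assuming the Podman API socket has been activated; the socket path and the version segment in the URL vary by setup and Podman release, so both are illustrative.

```shell
# Activate the Podman API socket (rootful example), then query it
# over the Unix socket with curl. Socket path and the API version
# segment in the URL are illustrative and may differ.
#   systemctl start podman.socket
#   curl -s --unix-socket /run/podman/podman.sock \
#        http://d/v2.0.0/libpod/info
```

Because the service also exposes Docker-compatible endpoints, existing tooling pointed at the socket can largely keep working.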
Red Hat announced the introduction of its enterprise customer advocacy program, Red Hat Accelerators. Drawing on Red Hat’s extensive community-building history, the customer-facing program is a natural extension of its customer-focused approach across both its open source and enterprise product portfolio, created to form deeper and more engaging relationships with its customers.
Red Hat Accelerators offers peer-to-peer networking with like-minded Red Hat practitioners to foster deeper learning and broaden exposure to various products, technologies, use cases and issues along with group discussions on features and functions.
Along with access to Red Hat experts and business leaders, participants receive exclusive, early access to products, which helps develop their domain expertise and build credibility.
The program also contributes to individuals’ personal and professional development, expanding their knowledge and skills to stay relevant and up to date in the industry, amongst peers and with the market.
Red Hat Accelerators launched as a way to serve passionate enterprise customers who wanted to share and engage with each other around relevant IT issues, remedies and solutions.
Additionally, Red Hat experts, product teams and even business leadership were included as a way for customers to voice their critical product and solution feedback, provide the business and product teams valuable insights, and help influence and improve development.
The program also integrates a continued focus on customer advocacy and maintaining an environment that encourages public advocacy through blogging, social media, speaking engagements and more.
The program draws on Red Hat’s long-established, community-driven business model of helping to form, grow and develop uncounted open source projects into solutions that customers need.
According to a report by Forrester, Credible Empathetic Content Wins Over Elusive B2B Buyers, 87% of technology buyers at global enterprises said it’s important for vendors to understand their business, industry or market conditions, and 82% want vendors to understand what’s most important to their job.
The program harnesses the power of real-world enterprise customer voices and gives them a direct line to help influence product development, so Red Hat can meet and exceed customer expectations.
Red Hat practitioner enthusiasts who want to make their mark in the community, provide direct feedback on Red Hat products to product owners and business leaders, elevate their status by connecting with like-minded enthusiasts, and join an elite program can apply now.
Candidates should not only have primary expertise in Red Hat’s platform products and know the value of a complete, multi-technology solution, but also have an affinity for the Red Hat brand.
Applicants should be hands-on practitioners willing to spend their time sharing their passion for Red Hat, looking to prioritize and uphold Red Hat’s open source and community standards of operation.
Chris Wright, senior vice president and chief technology officer, Red Hat: “As open source champions and community builders, Red Hat is always interested in hearing from members within its communities, and the Red Hat Accelerators community is no exception.
“In my experience collaborating with the Accelerators, I’ve found that substantial innovation comes from engaging real world customers and being truly receptive to their feedback. Plus, customers appreciate the continuous ability to share commentary on offerings and even influence products.
“With Accelerators, we cherish the opportunity to pull in some of the most passionate enterprise customers in their industries to the Red Hat fold.”
Will Darton, service manager – Enterprise UNIX Engineering, Navy Federal Credit Union: “Most importantly, Red Hat Accelerators connects some of the most easy-going, fun-loving, intelligent, open source experts in the industry.
“There is unparalleled access to key Red Hatters, cutting edge information, technical sessions and online tools to help you grow your open source brand and ecosystem. In 2019, I was able to attend Summit as an Accelerator, which brought more access to events and sessions, and it was my favorite Summit of any that I’ve attended.”
Red Hat Enterprise Linux has further solidified itself as a platform of choice for users requiring more secure computing, with Red Hat Enterprise Linux 7.6 achieving Common Criteria Certification as well as Commercial Solutions for Classified (CSfC) Status.
These validations show Red Hat’s commitment to supporting customers that use the world’s leading enterprise Linux platform for critical workloads in classified and sensitive deployment scenarios.
For Common Criteria, Red Hat Enterprise Linux 7.6 was certified by the National Information Assurance Partnership (NIAP), with testing and validation completed by Acumen Security, a U.S. government-accredited laboratory. The platform was tested and validated against the Common Criteria for Information Technology Security Evaluation (ISO/IEC 15408) using version 4.2.1 of the NIAP General Purpose Operating System Protection Profile, and is the latest Red Hat Enterprise Linux version to appear on the NIAP Product Compliant List.
Additionally, Red Hat Enterprise Linux 7.6 is now an approved TLS Protected Server component for Commercial Solutions for Classified (CSfC) solutions and is included in the CSfC TLS Protected Servers Components List. This program, established by the National Security Agency (NSA), enables commercial products to be used in layered solutions protecting National Security System (NSS) data.
Red Hat Enterprise Linux and Evaluation Assurance Levels (EAL)
Previously, Red Hat Enterprise Linux operating systems were certified at EAL4+. The Common Criteria Recognition Arrangement, the treaty that enables countries to recognize certifications across borders, now only recognizes certifications up to EAL2. The arrangement also rewrote Protection Profiles across products to be very specific about individual product requirements, documentation and testing procedures. A solution is now expected to either meet the Protection Profile exactly or not at all.
In the previous EAL system, the number (EAL2, EAL4, etc.) distinguished the degree of rigor applied to meeting open-ended requirements. This revised certification is designed to be more predictable and better suited to an operating system with frequent minor releases like Red Hat Enterprise Linux, with future platform certifications intended to be aligned with this certification method.
COVID-19 has upended the way we do all things. In this interview, Mike Bursell, Chief Security Architect at Red Hat, shares his view of which IT security changes are ongoing and which changes enterprises should prepare for in the coming months and years.
How has the pandemic affected enterprise edge computing strategies? Has the massive shift to remote work created problems when it comes to scaling hybrid cloud environments?
The pandemic has caused major shifts in the ways we live and work, from video calls to increased use of streaming services, forcing businesses to embrace new ways to be flexible, scalable, efficient and cost-saving. It has also exposed weaknesses in the network architectures that underpin many companies, as they struggle to cope with remote working and increased traffic. We’re therefore seeing both an accelerated shift to edge computing, which takes place at or near the physical location of either the end-user or the data source, and further interest in hybrid cloud strategies which don’t require as much on-site staff time.
Changing your processes to make the most of this without damaging your security posture requires thought and, frankly, new policies and procedures. Get your legal and risk teams involved – but don’t forget your HR department. HR has a definite role to play in allowing your key employees to continue to do the job you need them to do, but in ways that are consonant with the new world we’re living in.
However, don’t assume that these will be – or should be! – short-term changes. If you can find more efficient or effective ways of managing your infrastructure, without compromising your risk profile while also satisfying new staff expectations, then everyone wins.
What would you say are the most significant challenges for enterprises that want to build secure and future-proof application infrastructures?
One challenge is that although some of the technology is now quite mature, the processes for managing it aren’t, yet. And by that I don’t just mean technical processes, but how you arrange your teams and culture to suit new ways of managing, deploying, and (critically) automating your infrastructure. Add to this new technologies such as confidential computing (using Trusted Execution Environments to protect data in use), and there is still a lot of change.
The best advice is to plan for change – technical, process and culture – but do not, whatever you do, leave security till last. It has to be front and centre of any plans you make. One concrete change that you can make immediately is taking your security people off just “fire-fighting duty”, where they have to react to crises as they come in: businesses can consider how to use them in a more proactive way.
People don’t scale, and there’s a global shortage of security experts. So, you need to use the ones that you have as effectively as you can, and, crucially, give them interesting work to do, if you plan to retain them. It’s almost guaranteed that there are ways to extend their security expertise into processes and automation which will benefit your broader teams. At the same time, you can allow those experts to start preparing for new issues that will arise, and investigating new technologies and methodologies which they can then reapply to business processes as they mature.
How has cloud-native management evolved in the last few years and what are the current security stumbling blocks?
One of the areas of both maturity and immaturity is in terms of workload isolation. We can think of three types: workload from workload isolation (preventing workloads from interfering with each other – type 1); host from workload isolation (preventing workloads from interfering with the host – type 2); workload from host isolation (preventing hosts from interfering with workloads – type 3).
The technologies for types 1 and 2 are really quite mature now, with containers and virtual machines combining a variety of hardware and software techniques such as virtualization, cgroups and SELinux. On the other hand, protecting workloads from malicious or compromised hosts is much more difficult, meaning that regulators – and sensible enterprises! – are unwilling to have some workloads execute in the public cloud.
Technologies like secure and measured boot, combined with TPM capabilities by projects such as Keylime (which is fully open source) are beginning to address this, and we can expect major improvement as confidential computing (and open source projects like Enarx which uses TEEs) matures.
In the past few years, we’ve seen a huge interest in Kubernetes deployments. What common mistakes are organizations making along the way? How can they be addressed?
One of the main mistakes we see businesses make is attempting to deploy Kubernetes without the appropriate level of in-house expertise. Kubernetes is an ecosystem, rather than a one-off executable, and it relies on other services provided by open source projects. It requires IT teams to fully understand an architecture made up of application and network layers.
Once implemented, businesses must also maintain the ecosystem in parallel to any software running on top. When it comes to implementation, businesses are advised to follow open standards – those decided upon by the open source Kubernetes community as a whole, rather than a specific vendor. This will prevent teams from running into unexpected roadblocks, and helps to ensure a smooth learning curve for new team members.
Another mistake organizations can make is ignoring small but important details, such as Kubernetes’ backwards compatibility with older versions. It’s easy to overlook the fact that older versions may lack important security updates, so IT teams must be mindful when merging code across versions, and check regularly for available updates.
Open source remains one of the building blocks of enterprise IT. What’s your take on the future of open source code in large business networks?
Open source is here to stay, and that’s a good thing, not least for security. The more security experts there are to look at code, the more likely that bugs will be found and fixed. Of course, security experts are short on the ground, and busy, so it’s important that large enterprises make a commitment to getting involved with open source and committing resources to it.
Another issue is that people get confused into thinking that just because a project is open source, it’s ready to use. There’s a difference between an open source project and an enterprise product which is based on that project. In the latter case, you get all the benefits of testing, patching, upgrading, vulnerability processes, version management and support. In the former case, you need to manage everything yourself – including ensuring that you have sufficient expertise in house to cope with any issues that come up.
The Red Hat Marketplace is a one-stop-shop to find, try, buy, deploy and manage enterprise applications across an organization’s hybrid IT infrastructure, including on-premises and multicloud environments.
A private, personalized marketplace experience is also available, at additional cost, through Red Hat Marketplace Select, for enterprises that want additional control and governance: software is curated and pre-approved for that particular enterprise, for greater efficiency and scale.
Red Hat Marketplace and Red Hat Marketplace Select, operated by IBM, deliver an ecosystem of software from a range of independent software vendors (ISVs) built on Red Hat OpenShift to provide clients with modern, consistent solution discovery, trial, purchase and deployment. Red Hat OpenShift allows for the portability of mission-critical workloads across secured hybrid cloud environments with certified enterprise software that can help companies avoid vendor lock-in.
For companies building cloud-native infrastructure and applications, Red Hat Marketplace is an essential destination for unlocking the value of cloud investments, designed to minimize the barriers facing global organizations as they accelerate innovation. A growing ecosystem of ISVs has embraced the marketplace because it offers them an efficient, vendor-neutral, and data-driven channel for selling and supporting products in enterprise accounts.
The growing list of more than 50 commercial products available for purchase includes leading solutions across 12 different categories—including AI/ML, Database, Monitoring, Security, Storage, Big Data, Developer tools, and more—from ISVs such as Anchore, Cockroach Labs, CognitiveScale, Couchbase, Dynatrace, KubeMQ, MemSQL, MongoDB, and StorageOS.
All products are certified for Red Hat OpenShift and offered with commercial support. Built on the open Kubernetes Operator Framework, they can run on OpenShift like a cloud service, with capabilities like automated install and upgrade, backup, failover and recovery. With one of the largest commercial collections of portable, managed software built on open standards, Red Hat Marketplace is designed to help solve client challenges for hybrid, multicloud environments with features that are purpose-built for DevOps teams, buyers, IT leaders, and CIOs.
New power in the customer’s control
As organizations operate within hybrid cloud environments, they are increasingly concerned about governance and control of the applications running in those environments. To address this concern, the private version—Red Hat Marketplace Select—allows clients to not only provide their teams with easy access to curated, pre-approved software, but also to track usage and spending by departments of all the software deployed across hybrid cloud environments.
Marketplace customers are finding specific and strategic ways to take advantage of the marketplace. Anthem Inc. is pioneering personalized, predictive, and preventative solutions through efforts that include models enabled by AI. To accomplish their mission, they require a hybrid cloud platform that allows for secured data transfer between multiple parties. Anthem has been working closely with CognitiveScale, one of the ISVs on Red Hat Marketplace, and is now ready to move into the next phase by collaborating with Red Hat to create one of the first customized marketplaces for themselves through Red Hat Marketplace Select.
Leveraging the power of Red Hat OpenShift
With automated deployment, Red Hat Marketplace makes software instantly available for deployment on any Red Hat OpenShift cluster. Red Hat OpenShift is the industry’s most comprehensive enterprise Kubernetes platform, enabling portable, cloud-native software to run as a managed service by embedding operational expertise alongside the software itself.
Software programs available through Red Hat Marketplace can be deployed across the open hybrid cloud and operate in any environment with minimal set-up and overhead, making management at scale easy. With the integration of the enterprise-grade Kubernetes capabilities within Red Hat OpenShift, organizations can achieve build-once, run-anywhere portability across hybrid cloud platforms.
“We believe that removing the operational barriers to deploy and manage new tools and technologies can help organizations become more agile in hybrid multicloud environments. The software available on Red Hat Marketplace is tested, certified and supported on Red Hat OpenShift to enable built-in management logic and streamline implementation processes. This helps customers run faster with automated deployments while enjoying the improved scalability, security, and orchestration capabilities of Kubernetes-native infrastructure,” said Lars Herrmann, senior director, Technology Partnerships, Red Hat.
Domino Data Lab announced Domino 4.3, adding support for the popular Red Hat OpenShift distribution of Kubernetes to make it easier for its customers to scale data science workloads on any platform. 4.3 also improves Domino’s model monitoring capabilities, and extends its IT security features for enterprises via new reporting capabilities.
Domino offers a data science management platform that centralizes predictive analytics and machine learning (ML) research and development based on an open ecosystem that lets data scientists choose their preferred tools and algorithms while reducing the burden on IT.
“Large, sophisticated data science organizations demand flexibility in how they build and deploy their data science stacks. Adding Red Hat OpenShift to our wide variety of deployment options gives customers even more flexibility to run on almost any cloud provider or on their own on-prem hardware,” said Nick Elprin, co-founder and CEO at Domino Data Lab.
“We’re obsessed with delivering enterprise-grade security, control, reliability, and observability in a central platform that helps our many Fortune 100 customers unleash the power of data science. We continue to focus, with this release, on helping them accelerate and confidently manage their demanding data science operations.”
Expanded Elastic scaling with Red Hat OpenShift Kubernetes support
Kubernetes (K8s) is quickly becoming the IT standard for flexible containerized application orchestration across clusters with the ability to automatically deploy, scale capacity up and down on demand, and manage production workloads.
Red Hat OpenShift Kubernetes Engine, popular with IT teams, offers an attractive Kubernetes option for many customers since it can run on virtually all major cloud providers, as well as on-premise deployments.
With this release, Domino can now take advantage of intelligent Kubernetes orchestration on OpenShift clusters for efficient management and smart utilization of computing resources.
Rapidly scaling containerized workloads is particularly important as the demand for high-powered CPUs, GPUs and RAM can spike dramatically when training models or engineering features, and then quickly scale down once completed.
For organizations that have invested in large, centralized Kubernetes clusters to improve hardware utilization across a large pool of users and application workloads, Domino now supports multi-tenant Kubernetes clusters so a dedicated cluster for installation is not required.
Domino Model Monitor (DMM) enhancements
Domino Model Monitor (DMM), introduced in June 2020, now has powerful new capabilities that make it easier for enterprises to maintain high-performing ML models on any platform.
DMM lets organizations automate the monitoring of model inputs and outputs to detect changes in production data that could signal when a model is no longer producing results that are consistent with current business conditions.
Undetected data and model drift are especially problematic during a pandemic, since drastic changes to the economic environment and human behavior increase the likelihood of model inaccuracy and the associated risks of financial loss and a degraded customer experience.
The latest update includes new trend analysis capabilities that offer better insight into how the quality of a model’s predictions has been changing over time. It also includes new traffic charts to track the volume of model predictions and ground truth data (actual results) over time.
Advanced enterprise-grade authentication and security
Domino broadens its enterprise-grade authentication capabilities to include options for certification of Domino APIs and third-party services via short-lived Domino identity (OpenID) tokens to connect to any external authentication service.
When combined with its robust SSO capabilities, these enhancements make it easier for Domino administrators to grant or revoke user access while limiting where users are able to connect from.
Domino has also significantly enhanced its internal processes and tooling to comply with enterprise application monitoring and security reporting requirements, for example:
- Domino logs can be exposed to Fluentd-compatible aggregation tools
- Application health metrics can be integrated into Prometheus monitoring systems
- Container and dependencies support vulnerability scanning and remediation
Red Hat announced updates to its portfolio of developer tools, bringing new capabilities that further equip customers to build, deploy and manage applications in Kubernetes-based environments.
With tools optimized for Red Hat OpenShift, the industry’s most comprehensive enterprise Kubernetes platform, developers can tap into the benefits of Kubernetes—including speed, consistency, portability and scale—without extending development time or complexity.
The realities of today’s business environment are driving organizations toward more efficient and agile development and deployment approaches. This is the essence of cloud-native applications, where containers and Kubernetes are at the heart of these efforts.
However, this shift often requires changes to the tooling and processes of development teams. OpenShift eases this transition, enabling organizations to lean into this new paradigm while continuing to use their current tools and skill sets, and maintaining and supporting existing applications.
Red Hat OpenShift 4.5 addresses the needs of both developers who are unfamiliar with Kubernetes and just want to code, and expert Kubernetes developers seeking maximum flexibility.
In addition, Red Hat continues to move toward a supported Kubernetes-native continuous delivery and GitOps solution based on ArgoCD, where Red Hat is working with the Argo open source community to drive faster innovation in this space.
Red Hat has made enhancements to a number of other important areas in the developer portfolio:
- CodeReady Workspaces 2.2 enables remote development teams to provision and share environments with the click of a button, enabling faster starts and best-of-breed, low-latency interactions.
- Container builds continue to evolve in OpenShift with developer preview support for Buildpacks and Kaniko alongside Source-to-Image and Dockerfile builds through Buildah.
- Helm 3.2 is now a core part of OpenShift with a web console that simplifies working with charts and releases.
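An illustrative sketch of Helm 3 usage against an OpenShift cluster, shown as a transcript because it requires the `helm` CLI and a logged-in cluster; the repository URL and chart name are hypothetical examples, not part of the announcement.

```shell
# Add a chart repository, install a release into the current project,
# then list releases. The repo URL and chart name are hypothetical.
#   helm repo add examplerepo https://charts.example.com
#   helm install my-release examplerepo/example-chart
#   helm list
```

The same charts and releases can also be managed through the new OpenShift web console view mentioned above.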
- odo 2 is also included with OpenShift, providing a new way for developers to iterate on code: its command-line interface supports Kubernetes as well as OpenShift, offers an open model for tools through a standard definition, and enables rapid iterative Java development using Quarkus.
- OpenShift Serverless support of Knative serving and eventing enables developers to build serverless and event-driven applications that include Strimzi (Apache Kafka on Kubernetes) and service mesh.
- Finally, as continuous integration (CI) tools have become integral to development teams, Red Hat has expanded the functionality of Tekton in OpenShift Pipelines, and added OpenShift plugins for GitHub Actions, Microsoft Azure DevOps, Jenkins, and GitLab runner support.
Brad Micklea, vice president, Developer Tools, Program and Advocacy, Red Hat: “Red Hat OpenShift began as a developer-focused application platform and that ethos didn’t change when it adopted Kubernetes as its execution engine.
“We’ve continued to balance investment in new and unique tools to simplify Kubernetes for developers, with a broad set of plugins to popular IDEs and CI/CD systems so teams aren’t forced to change their toolset when they move to containers and Kubernetes for their deployed applications.
“OpenShift 4.5 shows continued acceleration in these areas, and is evidence of why IDC said that OpenShift ‘represents a breakthrough in the space of cloud-native development tools.’”
A vulnerability (CVE-2020-10713) in the widely used GRUB2 bootloader opens most Linux and Windows systems in use today to persistent compromise, Eclypsium researchers have found. The list of affected systems includes servers and workstations, laptops and desktops, and possibly a large number of Linux-based OT and IoT systems.
What’s more, the discovery of this vulnerability has spurred a larger effort to audit the GRUB2 code for flaws. As a result, seven CVE-numbered flaws and many others without a CVE have been brought to light (and have been or will be fixed).
CVE-2020-10713, named “BootHole” by the researchers who discovered it, can be used to install persistent and stealthy bootkits or malicious bootloaders that will operate even when the Secure Boot protection mechanism is enabled and functioning.
“The vulnerability affects systems using Secure Boot, even if they are not using GRUB2. Almost all signed versions of GRUB2 are vulnerable, meaning virtually every Linux distribution is affected,” the researchers explained.
“In addition, GRUB2 supports other operating systems, kernels and hypervisors such as Xen. The problem also extends to any Windows device that uses Secure Boot with the standard Microsoft Third Party UEFI Certificate Authority. Thus the majority of laptops, desktops, servers and workstations are affected, as well as network appliances and other special purpose equipment used in industrial, healthcare, financial and other industries. This vulnerability makes these devices susceptible to attackers such as the threat actors recently discovered using malicious UEFI bootloaders.”
The researchers have done a good job explaining in detail the why, where and how of the vulnerability, as has Kelly Shortridge, VP of Product Management and Product Strategy at Capsule8. The problem effectively lies in the fact that a GRUB2 configuration file can be modified by attackers to ensure that their own malicious code runs before the OS is loaded.
The only good news is that the vulnerability can’t be exploited remotely. The attacker must first gain a foothold on the system and escalate privileges to root/admin in order to exploit it. Alternatively, they must have physical access to the target system.
The real danger, according to Shortridge, is that criminals could incorporate this vulnerability into a bootkit and license it to bot authors, who would then deploy or sell bootkit-armed bots.
“This pipeline will not pop out pwnage overnight, so the question becomes whether mitigations can be successfully rolled out before criminals can scale this attack,” she noted.
A complex mitigation process
The main problem is that fixing this flaw on such a great number of systems will be a massive, complex and partly manual undertaking.
“Full mitigation of this issue will require coordinated efforts from a variety of entities: affected open-source projects, Microsoft, and the owners of affected systems, among others,” Eclypsium researchers noted.
“This will include: updates to GRUB2 to address the vulnerability; Linux distributions and other vendors using GRUB2 will need to update their installers, bootloaders, and shims [a small app that contains the vendor’s certificate and code that verifies and runs the GRUB2 bootloader]; new shims will need to be signed by the Microsoft 3rd Party UEFI CA; administrators of affected devices will need to update installed versions of operating systems in the field as well as installer images, including disaster recovery media; and eventually the UEFI revocation list (dbx) needs to be updated in the firmware of each affected system to prevent running this vulnerable code during boot.”
Again, both Eclypsium and Shortridge have helpfully explained the whole process in detail, along with the dangers it holds for organizations. Beyond the complex mitigation process itself, organizations should also monitor their systems for threats and ransomware that use vulnerable bootloaders to infect or damage systems.
Eclypsium researchers have provided recommendations and have linked to the various reference materials by Microsoft, Debian, Canonical, Red Hat, HPE, SUSE, VMware and others who need to help users and admins fix the problem.
They’ve also provided PowerShell and bash scripts to help administrators identify certificates revoked by the various OS vendors when they push out security updates for CVE-2020-10713.
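The actual vendor scripts are linked from the advisory; as a rough illustration of the kind of check such tooling performs, here is a minimal Python sketch that compares an installed GRUB2 version string against a cutoff. The cutoff used here is a placeholder assumption: real remediation tracking must follow each vendor's advisory, since distributions backport fixes without changing the upstream version number.

```python
# Illustrative sketch only: flag a GRUB2 version string that predates a
# hypothetical first-fixed release. Distributions backport fixes without
# bumping upstream versions, so this check alone is NOT a reliable audit.

def parse_version(v):
    """Split a dotted version string into a tuple of integers for comparison."""
    return tuple(int(part) for part in v.split("."))

def possibly_vulnerable(installed, first_fixed="2.06"):
    """Return True if the installed version predates the assumed fixed release."""
    return parse_version(installed) < parse_version(first_fixed)

print(possibly_vulnerable("2.04"))  # True
print(possibly_vulnerable("2.06"))  # False
```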
Other discovered vulnerabilities
After being notified of the existence of BootHole, Canonical (the company that develops Ubuntu) and others went in search of other security holes in GRUB2. They discovered seven related vulnerabilities, whose mitigations are included in today’s release for Ubuntu and other major Linux distributions.
“Given the difficulty of this kind of ecosystem-wide update/revocation, there is a strong desire to avoid having to do it again six months later,” Eclypsium researchers noted.
“To that end, a large effort — spanning multiple security teams at Oracle, Red Hat, Canonical, VMware, and Debian — using static analysis tools and manual review helped identify and fix dozens of further vulnerabilities and dangerous operations throughout the codebase that do not yet have individual CVEs assigned.”
Red Hat announced that Red Hat Enterprise Linux provides the operating system backbone for the top three supercomputers in the world and four out of the top 10, according to the newest TOP500 ranking.
Already serving as a catalyst for enterprise innovation across the hybrid cloud, these rankings also show that the world’s leading enterprise Linux platform can deliver a foundation to meet even the most demanding computing environments.
In the top ten of the current TOP500 list, Red Hat Enterprise Linux serves as the operating system for:
- Fugaku, the top-ranked supercomputer in the world based at RIKEN Center for Computational Sciences in Kobe, Japan.
- Summit, the number two-ranked supercomputer based at Oak Ridge National Laboratory in Oak Ridge, Tennessee.
- Sierra, the third-ranked supercomputer globally based at Lawrence Livermore National Laboratory in Livermore, California.
- Marconi-100, the ninth-ranked supercomputer installed at CINECA research center in Italy.
High-performance computing across architectures
Red Hat Enterprise Linux is engineered to deliver a consistent, standardized and high-performance experience across nearly any certified architecture and hardware configuration. These same exacting standards and consistency are also brought to supercomputing environments, providing a predictable and reliable interface regardless of the underlying hardware.
Fugaku is the first Arm-based system to take first place on the TOP500 list, highlighting Red Hat’s commitment to the Arm ecosystem from the datacenter to the high-performance computing laboratory.
Sierra, Summit and Marconi-100 all boast IBM POWER9-based infrastructure with NVIDIA GPUs; together with Fugaku, these four systems produce more than 680 petaflops of processing power to fuel a broad range of scientific research applications.
In addition to enabling this immense computation power, Red Hat Enterprise Linux also underpins six out of the top 10 most power-efficient supercomputers on the planet according to the Green500 list.
Systems on the list are measured in terms of both performance results and the power consumed to achieve them. In sustainable supercomputing, the premium is on finding a balanced approach that delivers the most energy-efficient performance.
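Concretely, the Green500 metric is sustained performance divided by power draw, typically reported in gigaflops per watt. A quick sketch of the arithmetic, using made-up figures rather than actual TOP500/Green500 numbers:

```python
# Green500-style efficiency metric: sustained GFLOPS per watt.
# The figures below are illustrative, not actual Green500 results.

def gflops_per_watt(rmax_tflops, power_kw):
    """Convert sustained TFLOPS and power in kW into GFLOPS/watt."""
    gflops = rmax_tflops * 1000.0  # teraflops -> gigaflops
    watts = power_kw * 1000.0      # kilowatts -> watts
    return gflops / watts

# Hypothetical system: 2,000 TFLOPS sustained while drawing 100 kW
print(gflops_per_watt(2000, 100))  # 20.0 GFLOPS/watt
```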
In the top ten of the Green500 list, Red Hat Enterprise Linux serves as the operating system for:
- The A64FX prototype, at number four, was created to test and develop the Fugaku supercomputer and is based at Fujitsu’s plant in Numazu, Japan.
- AIMOS, the number five supercomputer on the Green500 list based at Rensselaer Polytechnic Institute in Troy, New York.
- Satori, the seventh-ranked most power-efficient system in the world, installed at MIT’s Massachusetts Green High Performance Computing Center (MGHPCC) in Holyoke, Massachusetts. It serves as the home for the Mass Open Cloud (MOC) project, where Red Hat supports a number of activities.
- Summit at number eight.
- Fugaku at number nine.
- Marconi-100 at number ten.
From the laboratory to the datacenter and beyond
Modern supercomputers are no longer purpose-built monoliths constructed from expensive bespoke components. Each supercomputer deployment powered by Red Hat Enterprise Linux uses hardware that can be purchased and integrated into any datacenter, making it feasible for organizations to use enterprise systems that are similar to those breaking scientific barriers.
Regardless of the underlying hardware, Red Hat Enterprise Linux provides the common control plane for supercomputers to be run, managed and maintained in the same manner as traditional IT systems.
Red Hat Enterprise Linux also opens supercomputing applications up to advancements in enterprise IT, including Linux containers. Working closely in open source communities with organizations like the Supercomputing Containers project, Red Hat is helping to drive advancements to make Podman, Skopeo and Buildah, components of Red Hat’s distributed container toolkit, more accessible for building and deploying containerized supercomputing applications.
Stefanie Chiras, vice president and general manager, Red Hat Enterprise Linux Business Unit, Red Hat: “Supercomputing is no longer the domain of custom-built hardware and software. With the proliferation of Linux across architectures, high-performance computing has now become about delivering scalable computational power to fuel scientific breakthroughs.
“Red Hat Enterprise Linux already provides the foundation for innovation to the enterprise world and, with the recent results of the TOP500 list, we’re pleased to now provide this same accessible, flexible and open platform to the world’s fastest and some of the most power-efficient computers.”
Steve Conway, senior adviser, HPC Market Dynamics, Hyperion Research: “Every one of the world’s TOP500 most powerful supercomputers runs on Linux, and a recent study we did confirmed that Red Hat is the most popular vendor-supported Linux solution in the global high performance computing market.
“Red Hat Enterprise Linux is designed to run seamlessly on a variety of architectures underlying leading supercomputers, playing an important part in driving HPC into new markets and use cases, including AI, enterprise computing, quantum computing and cloud computing.”
Microsoft has added support for Linux and Android to Microsoft Defender ATP, its unified enterprise endpoint security platform.
Microsoft Defender Advanced Threat Protection is designed to help enterprises prevent, detect, investigate, and respond to advanced cyber threats on company endpoints from one central point.
Microsoft Defender ATP for Linux
“Adding Linux into the existing selection of natively supported platforms by Microsoft Defender ATP marks an important moment for all our customers. It makes Microsoft Defender Security Center a truly unified surface for monitoring and managing security of the full spectrum of desktop and server platforms that are common across enterprise environments (Windows, Windows Server, macOS, and Linux),” noted Helen Allas, a principal program manager at Microsoft.
Microsoft Defender ATP for Linux supports the most recent versions of CentOS Linux, Debian, Oracle Linux, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES) and Ubuntu.
“This initial release delivers strong preventive capabilities, a full command line experience on the client to configure and manage the agent, initiate scans, manage threats, and a familiar integrated experience for machines and alert monitoring in the Microsoft Defender Security Center,” Allas explained.
Microsoft Defender ATP for Linux requires the Microsoft Defender ATP for Servers license and can be deployed and configured using the Puppet or Ansible configuration management tool or the organization’s existing Linux configuration management tool.
Further requirements and info about deployment and use are available here.
Microsoft Defender ATP for Android
Microsoft has also announced on Tuesday the public preview of Defender ATP for Android.
Microsoft Defender ATP for Android will automatically block access to unsafe/phishing websites from SMS/text, WhatsApp, email, browsers, and other apps, as well as block unsafe network connections that apps might make on the user’s behalf.
Users will be notified of the block and asked whether they want to proceed anyway, report the block, or dismiss the notification.
Microsoft Defender ATP for Android is also capable of detecting malicious apps, potentially unwanted applications and malicious files on the protected device.
“Additional layers of protection against malicious access to sensitive corporate information is offered by integrating with Microsoft Endpoint Manager, which includes both Microsoft Intune and Configuration Manager,” explained Kanishka Srivastava, a senior program manager at Microsoft.
“For example, a compromised device would be blocked from accessing Outlook email. When Microsoft Defender ATP for Android finds that a device has malicious apps installed, it will classify the device as ‘high risk’ and will flag it in the Microsoft Defender Security Center. Microsoft Intune uses the device’s risk level in conjunction with pre-defined compliance polices to activate Conditional Access rules that block access to corporate assets from the high risk device. (…) Once the malicious app is uninstalled, access to corporate assets is restored automatically for the mobile device.”
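The access-control flow described in the quote above can be sketched as simple policy logic. The names, risk levels and policy threshold below are hypothetical illustrations, not the actual Intune or Defender ATP API:

```python
# Hypothetical sketch of the risk-based conditional access flow described
# above. Names and levels are illustrative, not the Intune/Defender API.

RISK_LEVELS = {"low": 0, "medium": 1, "high": 2}

def device_risk(malicious_app_found):
    """Per the article: a device with a malicious app is classified 'high risk'."""
    return "high" if malicious_app_found else "low"

def allow_corporate_access(risk, policy_max="low"):
    """Conditional Access: block devices whose risk exceeds the policy threshold."""
    return RISK_LEVELS[risk] <= RISK_LEVELS[policy_max]

print(allow_corporate_access(device_risk(True)))   # False: access blocked
print(allow_corporate_access(device_risk(False)))  # True: access restored
```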
Enterprise admins will be able to see the alerts, threats and activities in the Microsoft Defender Security Center and make appropriate decisions.
Srivastava added that more capabilities for Android will be rolled out in the coming months and that Microsoft Defender ATP for iOS will be released later this year.
Red Hat, the world’s leading provider of open source solutions, announced new offerings to help organizations of all sizes and industries optimize, scale or simply protect IT operations in the face of shifting global dynamics.
Red Hat has long championed technology evolutions and wants to enable customers to build any application and deploy everywhere with the consistency and flexibility an open hybrid cloud foundation provides.
Building on this vision, Red Hat’s new offerings are designed to improve the delivery, accessibility and stability of critical services and applications on a worldwide scale on the backbone of the hybrid cloud.
More than ever before, Red Hat sees a need for IT to evolve to meet rapidly expanding demand for always-on digital services and ever-present connectivity. Nearly every industry, including healthcare, logistics, retail, financial services, government, education and more, is adapting in real-time to meet demand for faster, more widespread access to essential applications and services while maintaining operational stability.
In the telecommunications industry, for example, traffic has spiked by more than 50% in some global regions. This surge has led telecommunications and service providers to expand capacity and speed up 5G deployments and edge computing, in turn driving examinations of network and cloud infrastructure readiness.
Red Hat believes that the necessary technologies for meeting these needs are not tied to legacy software stacks or rooted in expensive proprietary technologies. Instead, the answers will be driven by open source innovation, enabling organizations to take advantage of cloud-native platforms everywhere, from the edge and on-premises datacenters to multiple public clouds.
With open source technologies like Linux and Kubernetes, organizations not only have access to innovation that can help them build what’s next and keep them at the forefront of their industries, but also automate, adapt and scale existing operations across IT environments with greater flexibility than proprietary vendors can provide.
Traditional and cloud-native apps, unified and managed from the core to the edge
Red Hat’s leadership in open source communities is not a matter of marketing or convenience. Red Hat was an early contributor to Kubernetes alongside Google and remains the project’s second-leading corporate contributor, helping to advance the key technologies in Kubernetes and related communities that are enabling this cross-industry IT evolution.
With Red Hat OpenShift, Red Hat pioneered an enterprise Kubernetes platform that has enabled customers to embrace cloud-native approaches while also supporting existing traditional applications.
Red Hat OpenShift is trusted by customers across industries because of this differentiated approach. To help further eliminate the barriers between traditional and cloud-native applications, Red Hat is introducing capabilities that enable new workloads on OpenShift and that meet customers where they are.
Red Hat announcements
OpenShift virtualization, a new feature available as a Technology Preview within Red Hat OpenShift, derived from the KubeVirt open source project. It enables organizations to develop, deploy and manage applications consisting of virtual machines alongside containers and serverless, all in one modern platform that unifies cloud-native and traditional workloads.
While some vendors seek to protect legacy technology stacks by dragging Kubernetes and cloud-native functionality backwards to preserve proprietary virtualization, Red Hat does the opposite: Bringing traditional application stacks forward into a layer of open innovation, enabling customers to truly transform at their speed, not at the whims of proprietary lock-in.
Red Hat OpenShift 4.4, the latest version of the industry’s leading enterprise Kubernetes platform, which builds on the simplicity and scale of Kubernetes Operators. Rebased on Kubernetes 1.17, OpenShift 4.4 introduces a developer-centric view of platform metrics and monitoring for application workloads; monitoring integration for Red Hat Operators; cost management for assessing the resources and costs used for specific applications across the hybrid cloud; and much more.
To address the management challenges of running cloud-native applications across large-scale, production and distributed Kubernetes clusters, Red Hat is also introducing a new management solution.
Red Hat Advanced Cluster Management for Kubernetes, soon available as a Technology Preview, provides a single, simplified control point for the monitoring and deployment of OpenShift clusters at scale, offering policy-driven governance and application lifecycle management. Read more about Red Hat Advanced Cluster Management for Kubernetes here.
Delivering a foundation for innovation, everywhere and anywhere
Innovation is more than simply delivering new technologies. As with Red Hat’s entire open hybrid cloud portfolio, these solutions are backed by a complete ecosystem of supporting software, hardware and services and by Red Hat’s extensive expertise in integrating and operating open innovation.
Red Hat also delivers production confidence by maintaining a long-life, enterprise-class lifecycle for its entire product portfolio including these newly launched offerings.
These advancements are not contingent on a single piece of hardware or a sole cloud provider, as Red Hat delivers innovation fully across hybrid and multicloud footprints, including:
- Every major public cloud provider, including Amazon Web Services, Google Cloud Platform, IBM Cloud and Microsoft Azure, as well as many specialized cloud providers.
- Managed solutions through OpenShift Dedicated, Azure Red Hat OpenShift and IBM Red Hat OpenShift Kubernetes Service enabling organizations to gain the benefits of enterprise Kubernetes without the burden of infrastructure management.
- Support for multiple computing architectures, including x86, IBM Power and mainframes.
Beyond these new technologies, Red Hat has invested in helping organizations get the most from existing infrastructure. This includes:
- Enhancements to Red Hat Insights, Red Hat’s proactive security and risk management as-a-service offering, which makes it easier for IT teams to detect, diagnose and remediate potential problems before they impact production systems or end users. Insights is not an add-on, as it is available across every supported Red Hat Enterprise Linux subscription by default.
- Red Hat Ansible Automation Platform also helps to address the complexities of expanding network demand and infrastructure footprints by automating time-consuming manual tasks, helping IT teams to more effectively meet customer and end user needs beyond service uptime.
Red Hat training and certification are also available to IT teams seeking to quickly expand skillsets as connectivity needs evolve. From learning the basics of enterprise Kubernetes to gaining certification in telecommunications architecture, Red Hat’s expertise is available to help IT professionals gain new expertise and experience to better address the growing importance of the network.
Paul Cormier, president and CEO, Red Hat: “Perhaps more than ever before, the unique needs of every organization are in sharp focus – some need to scale operations immediately to meet relentless services demand while others seek to strengthen and maintain core IT operations.
“Rather than only provide technologies to address one need or the other, Red Hat provides a flexible, fully open set of solutions to our customers, meeting them where they are with what they need.
“This could be the world’s leading enterprise Linux platform to drive greater operational stability or the industry’s leading enterprise Kubernetes platform to help rapidly scale services for critical demands, all backed by our expertise, experience and commitment to helping global communities at large, not just our immediate customers.”
Red Hat Enterprise Linux 8.2, built for the interconnected nature of the hybrid cloud era, is designed to offer these capabilities and more, extending beyond the reliability, stability and production-readiness for which the platform is known. The latest additions to the Red Hat Enterprise Linux 8 platform help organizations recognize more value from existing Red Hat Enterprise Linux subscriptions with:
- New intelligent management and monitoring capabilities via updates to Red Hat Insights
- Enhanced container tools
- A smoother user experience for Linux experts and newcomers alike
As the world works to manage the COVID-19 pandemic, more and more IT organizations are operating remotely or with limited manpower. Now more than ever, IT teams need to be able to monitor, manage and analyze the underlying foundations of enterprise technology stacks, regardless of size, scale, complexity or where they reside across hybrid/multicloud footprints. Red Hat Enterprise Linux can help intelligently detect, diagnose and address potential issues before they impact production, driven by advancements in Red Hat Insights.
Red Hat Insights, Red Hat’s proactive operations and security risk management offering, is included in Red Hat Enterprise Linux subscriptions for versions 6.4 and higher. The latest updates to the service add new use case functionality and features including:
- Improved visibility into IT security, compliance postures and operational efficiencies, helping to eliminate manual methods and improve productivity in managing large and complex environments while enhancing security and compliance across these deployments.
- New Policies and Patch services to help organizations define and monitor important internal policies and determine which Red Hat product advisories apply to Red Hat Enterprise Linux instances, along with guidance for remediation.
- A Drift service to help IT teams compare systems to baselines, providing a benchmark to guide strategies for reducing complexity and expediting troubleshooting.
Additional monitoring and performance updates in Red Hat Enterprise Linux 8.2 include:
- Improved resource management with Control Groups (cgroup) v2, which is designed to help limit memory usage through reserving memory and setting usage floors/limits. This helps prevent specific processes from overconsuming memory and causing system failures or slowdowns.
- Better capabilities for optimizing performance-sensitive workloads through NUMA and sub-NUMA service policies.
- Performance Co-Pilot (PCP) 5.0.2 which adds new collection agents for Microsoft SQL Server 2019 to help collect and analyze a wide array of SQL Server-related metrics, providing a clearer picture for database and operating system tuning.
- Red Hat subscription watch, a software-as-a-service (SaaS) tool that enables customers to more easily view and manage Red Hat Enterprise Linux and Red Hat OpenShift Container Platform subscriptions across hybrid cloud infrastructure.
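The cgroup v2 memory controls listed above work by writing human-readable values such as "512M" to interface files like memory.max. A simplified sketch of how such limit strings translate to byte counts (illustration only; actual limits are set through the cgroup filesystem, and the literal string "max" means no limit):

```python
# Simplified sketch: converting human-readable cgroup v2 memory limits
# (as written to files like memory.max or memory.min) into byte counts.
# Illustration only; real enforcement is done by the kernel.

UNITS = {"K": 1024, "M": 1024**2, "G": 1024**3}

def limit_to_bytes(limit):
    """Parse a cgroup v2 memory limit string into bytes (None = no limit)."""
    if limit == "max":
        return None  # "max" means no limit
    if limit[-1] in UNITS:
        return int(limit[:-1]) * UNITS[limit[-1]]
    return int(limit)  # plain byte count

print(limit_to_bytes("512M"))  # 536870912
```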
Evolved container tools to build for the future
While containerized workloads provide a clear path towards digital transformation and a cloud-native future, the tools used to build these applications must balance the latest, up-to-date innovations with a stable and supported lifecycle. In Red Hat Enterprise Linux 8.2, an updated application stream of Red Hat’s container tools is available, supported for 24 months. Additionally, for organizations looking to build containers inside of containers for additional layers of isolation and security, containerized versions of Skopeo and Buildah are available in Tech Preview.
To further extend the security of containerized workloads, Red Hat Enterprise Linux 8.2 introduces Udica, a new tool for more easily creating customized, container-centric SELinux security policies. When applied to a specific workload, Udica can reduce the risk that a process can “break out” of a container and cause problems across other containers or to the host itself.
Red Hat Enterprise Linux 8.2 also introduces enhancements to the Red Hat Universal Base Image, including:
- OpenJDK and .NET 3.0 for expanded developer choice in building Red Hat certification-ready cloud-native applications
- Improved access to source code associated with a given image through a single command, making it easier for Red Hat partners to meet source code requirements for open source licensing needs.
Red Hat, the world’s leading provider of open source solutions, highlighted that more organizations are using Red Hat OpenShift as the foundation for building artificial intelligence (AI) and machine-learning (ML) data science workflows and AI-powered intelligent applications.
OpenShift helps to provide agility, flexibility, portability and scalability across the hybrid cloud, from cloud infrastructure to edge computing deployments, a necessity for developing and deploying ML models and intelligent applications into production more quickly and without vendor lock-in.
As a production-proven enterprise container and Kubernetes platform, OpenShift delivers integrated DevOps capabilities for independent software vendors (ISVs) via Kubernetes Operators and NVIDIA GPU-powered infrastructure platforms.
This combination can help organizations simplify the deployment and lifecycle management of AI/ML toolchains as well as support hybrid cloud infrastructure. With these enhancements, data scientists and software developers are empowered to better collaborate and innovate in the hybrid cloud rather than simply manage infrastructure resource requests.
Customer and ecosystem interest in AI/ML
The customer momentum seen by Red Hat validates the AI/ML findings from the recent 2020 Red Hat Global Customer Tech Outlook report. The report surveyed 876 Red Hat customers on their top IT priorities and found that 30% of respondents plan on using AI/ML over the next 12 months, ranking AI/ML as the top emerging technology workload consideration for companies surveyed in 2020.
For example, Kasikorn Business-Technology Group (KBTG) supports the day-to-day operations of KBank, one of Thailand’s largest commercial banks, and also provides technology developer and partner services for fintech firms across Thailand.
To support the doubling of KBank’s user base, KBTG developed K PLUS AI-Driven Experience (KADE) to help analyze customer behavior and deliver a more personalized experience and also launched UCenter, a unified notification feed system, built and deployed on Red Hat OpenShift.
Other customer cases of AI/ML solutions on Red Hat OpenShift include Boston Children’s Hospital and more.
Along with these customers using OpenShift to accelerate AI/ML workflows and deliver AI-powered intelligent applications, AI/ML ISV partners including CognitiveScale, Dotscience, NVIDIA and Seldon have recently developed OpenShift integrations via certified Kubernetes Operators.
OpenShift is also powering IBM Cloud Paks to help customers accelerate their journey to the cloud and transform business operations in support of new workloads, including AI/ML/DL.
Additionally, to streamline the adoption of AI-enabled infrastructure in the enterprise datacenters, Red Hat has collaborated with Hewlett Packard Enterprise (HPE) and NVIDIA on a new Accelerated AI Reference Architecture, which offers design and deployment guidelines to help mutual customers bring AI-based applications to production more quickly.
Driving open AI innovation
Red Hat continues to be an active contributor to the Kubeflow open source community project, which focuses on simplifying ML workflows for Kubernetes while enhancing workload portability and scalability.
Kubeflow can now run on OpenShift as documented here, and a Kubeflow Kubernetes Operator is in development to help simplify the deployment and lifecycle management of Kubeflow on OpenShift.
Additionally, Red Hat leads the Open Data Hub community project to provide a blueprint for building an AI-as-a-Service platform with Red Hat OpenShift, Red Hat Ceph Storage and more.
Open Data Hub v0.5.1 is now available and includes tools like JupyterHub 3.0.7, Apache Spark Operator 1.0.5 for managing Spark clusters on OpenShift, and the Apache Superset data exploration and visualization tool.
A vulnerability (CVE-2020-8597) in the Point-to-Point Protocol Daemon (pppd) software, which comes installed on many Linux-based and Unix-like operating systems and networking devices, can be exploited by unauthenticated attackers to achieve code execution on – and takeover of – a targeted system.
The vulnerability affects Debian GNU/Linux, NetBSD, Red Hat, Ubuntu, OpenWRT, TP-LINK and Cisco offerings, and other software/products.
About the vulnerability (CVE-2020-8597)
Pppd is a daemon that is used to manage PPP session establishment and session termination between two nodes on Unix-like operating systems.
CVE-2020-8597 is a buffer overflow vulnerability that arose due to a flaw in Extensible Authentication Protocol (EAP) packet processing in eap_request and eap_response subroutines.
It can be exploited remotely, without authentication, by simply sending an unsolicited, specially crafted EAP packet to a vulnerable ppp client or server.
The flaw was discovered and responsibly disclosed by Ilja Van Sprundel, Director of Penetration Testing at IOActive.
It affects pppd versions 2.4.2 through 2.4.8 and was patched in early February.
“PPP is the protocol used for establishing internet links over dial-up modems, DSL connections, and many other types of point-to-point links including Virtual Private Networks (VPN) such as Point to Point Tunneling Protocol (PPTP). The pppd software can also authenticate a network connected peer and/or supply authentication information to the peer using multiple authentication protocols including EAP,” IOActive explained in a security advisory.
“Due to a flaw in the Extensible Authentication Protocol (EAP) packet processing in the Point-to-Point Protocol Daemon (pppd), an unauthenticated remote attacker may be able to cause a stack buffer overflow, which may allow arbitrary code execution on the target system. This vulnerability is due to an error in validating the size of the input before copying the supplied data into memory. As the validation of the data size is incorrect, arbitrary data can be copied into memory and cause memory corruption possibly leading to execution of unwanted code.”
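To illustrate the bug class described in the advisory (a toy model in Python, not pppd's actual C code): the vulnerable pattern trusts a length field taken from the attacker-controlled packet, while the fixed pattern validates it against the destination buffer's capacity before copying.

```python
# Toy model of the CVE-2020-8597 bug class (illustrative only, not pppd code).
BUF_SIZE = 16  # capacity of the destination buffer

def copy_unchecked(payload, claimed_len):
    """Vulnerable pattern: trusts the attacker-supplied length field.
    In C, this would write past the end of a 16-byte stack buffer."""
    return payload[:claimed_len]

def copy_checked(payload, claimed_len):
    """Patched pattern: validate the claimed length before copying."""
    if claimed_len > BUF_SIZE or claimed_len > len(payload):
        raise ValueError("invalid EAP length field")
    return payload[:claimed_len]

packet = b"A" * 64  # crafted packet claiming a 64-byte payload
print(len(copy_unchecked(packet, 64)))  # 64 -- far beyond the 16-byte buffer
try:
    copy_checked(packet, 64)
except ValueError:
    print("oversized packet rejected")
```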
“Update your software with the latest available patches provided by your software vendor,” IOActive advises. “It is incorrect to assume that pppd is not vulnerable if EAP is not enabled or EAP has not been negotiated by a remote peer using a secret or passphrase. This is due to the fact that an authenticated attacker may still be able to send unsolicited EAP packet to trigger the buffer overflow.”
CERT/CC’s advisory provides up-to-date information about affected products by various vendors and links to those vendors advisories, which then link to fixes (when they are made available).
Tenable says that there are still no working PoCs for this vulnerability, but that there soon might be.
“One appears to be a work-in-progress, while another claims that a PoC will be released for this vulnerability ‘in a week or two when things die down.’”
Red Hat, the world’s leading provider of open source solutions, announced Red Hat JBoss Enterprise Application Platform (JBoss EAP) 7.2 has been awarded Common Criteria Certification at Evaluation Assurance Level (EAL) 4+ by the Italian Common Criteria scheme Organismo di Certificazione della Sicurezza Informatica (OCSI).
The certification provides government agencies, financial institutions, and customers in other security-sensitive and regulated environments the assurance and confidence that JBoss EAP 7.2 meets government security standards.
This achievement demonstrates Red Hat’s industry leadership in technology and security. This is the third time JBoss EAP has achieved Common Criteria certification. In 2015, JBoss EAP 6.2 also achieved recognition at the EAL4+ assurance level.
Red Hat’s latest certification will be recognized by all countries under the Common Criteria Recognition Arrangement (CCRA) at Evaluation Assurance Level 2, since there are no generally agreed criteria for mutual recognition at higher assurance levels.
The Common Criteria is an internationally recognized set of standards used by the federal government and organizations to assess the security and assurance of technology offerings. EAL categorizes the depth and rigor of the evaluation, and EAL4+ assures consumers that the software has been methodically designed, tested, and reviewed to meet the evaluation criteria.
Red Hat worked with atsec information security, a government-accredited laboratory with locations in the United States, Germany, Sweden, Singapore and Italy, to complete the certification. atsec tested and validated the security, performance and reliability of the solution against the Common Criteria for Information Technology Security Evaluation (ISO/IEC 15408) at EAL4+.
Paul Smith, senior vice president and general manager, Public Sector, Red Hat: “We’re exceptionally proud that Red Hat JBoss Enterprise Application Platform again has achieved the Common Criteria Certification.
“It is important that our customers know they are getting the highest standard of security when they use JBoss EAP, especially those in highly regulated industries. Common Criteria accreditation is a rigorous security standard and means customers can confidently trust Red Hat with sensitive applications, services and data.
“Repeatedly achieving this accreditation is a key value of the Red Hat subscription, one that differentiates enterprise-class open source and proves our ongoing dedication to providing top solutions to security-conscious customers.”
Kenneth Hake, Common Criteria laboratory manager, atsec U.S.: “We are proud to continue to be Red Hat’s laboratory of choice for evaluating its products for Common Criteria Certification. The completion of this certification for JBoss Enterprise Application Platform 7.2 means that the product meets rigorous security standards at EAL 4+.
“The evaluation included the security functionality of Access Control, Role-based Access Control, Audit, Clustering, Identification and Authentication, and Transaction Rollback within the scope.”