Intel unveiled a suite of new security features for the upcoming 3rd generation Intel Xeon Scalable platform, code-named “Ice Lake.”
Intel is doubling down on its Security First Pledge, bringing its pioneering and proven Intel Software Guard Extensions (Intel SGX) to the full spectrum of Ice Lake platforms, along with new features that strengthen the platform and improve the overall confidentiality and integrity of data: Intel Total Memory Encryption (Intel TME), Intel Platform Firmware Resilience (Intel PFR) and new cryptographic accelerators.
Data is a critical asset both in terms of the business value it may yield and the personal information that must be protected, so cybersecurity is a top concern.
The security features in Ice Lake enable Intel’s customers to develop solutions that help improve their security posture and reduce risks related to privacy and compliance, such as regulated data in financial services and healthcare.
“Protecting data is essential to extracting value from it, and with the capabilities in the upcoming 3rd Gen Xeon Scalable platform, we will help our customers solve their toughest data challenges while improving data confidentiality and integrity. This extends our long history of partnering across the ecosystem to drive security innovations,” said Lisa Spelman, corporate vice president, Data Platform Group and general manager, Xeon and Memory Group at Intel.
Data protection across the compute stack
Technologies such as disk- and network-traffic encryption protect data in storage and during transmission, but data can be vulnerable to interception and tampering while in use in memory.
“Confidential computing” is a rapidly emerging usage category that protects data while it is in use in a Trusted Execution Environment (TEE). Intel SGX is the most researched, updated and battle-tested TEE for data center confidential computing, with the smallest attack surface within the system. It enables application isolation in private memory regions, called enclaves, to help protect up to 1 terabyte of code and data while in use.
“Microsoft Azure was the first major public cloud to offer confidential computing, and customers from industries including finance, healthcare and government are using confidential computing on Azure today,” said Mark Russinovich, CTO, Microsoft Azure.
“Azure has confidential computing options for virtual machines, containers, machine learning, and more. We believe the next generation Intel Xeon processors with Intel SGX featuring full memory encryption and cryptographic acceleration will help our customers unlock even more confidential computing scenarios.”
Customers like the University of California San Francisco, NEC, Magnit and other organizations in highly regulated industries have relied on Intel to support their security strategy and leveraged Intel SGX with proven results. For example, healthcare organizations can more securely protect data — including electronic health records — with a trusted computing environment that better preserves patient privacy.
In other industries, such as retail, companies rely on Intel to help keep data confidential and protect intellectual property. Intel SGX helps customers unlock new multi-party shared compute scenarios that have been difficult to build in the past due to privacy, security and regulatory requirements.
Full memory encryption
To better protect the entire memory of a platform, Ice Lake introduces a new feature called Intel Total Memory Encryption (Intel TME). Intel TME helps ensure that all memory accessed from the Intel CPU is encrypted, including customer credentials, encryption keys and other IP or personal information on the external memory bus.
Intel developed this feature to provide greater protection for system memory against hardware attacks, such as removing and reading the dual in-line memory module (DIMM) after spraying it with liquid nitrogen or installing purpose-built attack hardware.
Using the NIST storage encryption standard, AES XTS, an encryption key is generated using a hardened random number generator in the processor without exposure to software. This allows existing software to run unmodified while better protecting memory.
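The two properties described above can be sketched in software. The toy model below (standard library only, with a SHA-256 keystream standing in for AES-XTS, which is what the real hardware uses) illustrates that the key is generated inside the engine and never exposed through its API, and that the memory address acts as a tweak so identical data at different locations encrypts differently. Class and method names are purely illustrative.

```python
import hashlib
import secrets


class MemoryEncryptionEngine:
    """Toy model of TME-style transparent memory encryption.

    The key is generated inside the 'engine' (as the CPU's hardened RNG
    does) and is never exposed through the API, so software above it can
    run unmodified. The keystream here is a SHA-256-based stand-in for
    AES-XTS; it is not the real cipher, only the general shape.
    """

    def __init__(self):
        self._key = secrets.token_bytes(32)  # never leaves the engine

    def _keystream(self, address: int, length: int) -> bytes:
        # XTS-like tweak: mixing the address into the keystream means the
        # same plaintext at different addresses encrypts differently.
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(
                self._key
                + address.to_bytes(8, "big")
                + counter.to_bytes(8, "big")
            ).digest()
            counter += 1
        return out[:length]

    def write(self, address: int, data: bytes) -> bytes:
        ks = self._keystream(address, len(data))
        return bytes(p ^ k for p, k in zip(data, ks))

    read = write  # XOR keystream: decryption is the same operation


engine = MemoryEncryptionEngine()
ct = engine.write(0x1000, b"secret credentials")
assert ct != b"secret credentials"
assert engine.read(0x1000, ct) == b"secret credentials"
assert engine.write(0x2000, b"secret credentials") != ct  # address tweak
```

Because the key lives only inside the engine and encryption happens on every write path, the code storing and loading the data needs no changes, which mirrors why Intel TME lets existing software run unmodified.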
One of Intel’s design goals is to remove or reduce the performance impact of increased security so customers don’t have to choose between better protection and acceptable performance. Ice Lake introduces several new instructions used throughout the industry, coupled with algorithmic and software innovations, to deliver breakthrough cryptographic performance.
There are two fundamental innovations. The first is a technique to stitch together the operations of two algorithms that typically run in combination yet sequentially, allowing them to execute simultaneously. The second is a method to process multiple independent data buffers in parallel.
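The second idea, multi-buffer processing, can be loosely illustrated in software. The sketch below (an analogy only, not Intel's actual implementation) keeps one hash state per buffer and round-robins fixed-size chunks across them, mirroring how a multi-buffer implementation keeps several independent data streams in flight so wide SIMD units stay fully utilized.

```python
import hashlib


def multi_buffer_sha256(buffers, chunk=64):
    """Hash several independent buffers 'in flight' at once.

    Hardware multi-buffer crypto keeps multiple independent states active
    simultaneously so parallel lanes stay full. This pure-Python sketch
    mirrors only the scheduling idea: one state per buffer, advanced in
    round-robin fashion, chunk by chunk.
    """
    states = [hashlib.sha256() for _ in buffers]
    offset = 0
    while any(offset < len(buf) for buf in buffers):
        for state, buf in zip(states, buffers):
            # Slicing past the end yields b"", so short buffers are a no-op.
            state.update(buf[offset:offset + chunk])
        offset += chunk
    return [s.hexdigest() for s in states]


bufs = [b"a" * 100, b"b" * 300, b"c" * 10]
# Interleaved processing produces the same digests as hashing each
# buffer in one shot; only the schedule differs.
assert multi_buffer_sha256(bufs) == [hashlib.sha256(b).hexdigest() for b in bufs]
```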
Sophisticated adversaries may attempt to compromise or disable the platform’s firmware to intercept data or take down the server. Ice Lake introduces Intel Platform Firmware Resilience (Intel PFR) to the Intel Xeon Scalable platform, a feature designed to detect platform firmware attacks and correct them before they can compromise or disable the machine.
Intel PFR uses an Intel FPGA as a platform root of trust to validate critical-to-boot platform firmware components before any firmware code is executed. The firmware components protected can include BIOS Flash, BMC Flash, SPI Descriptor, Intel Management Engine and power supply firmware.
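The core pattern here is measure-before-execute: the root of trust hashes a firmware region and compares the result against a provisioned known-good value before any of that firmware runs. The sketch below shows only that general pattern; the "golden" digest and image contents are hypothetical, and real Intel PFR uses an FPGA root of trust with signed manifests rather than a bare hash comparison.

```python
import hashlib
import hmac

# Hypothetical known-good digest, provisioned into the root of trust at
# manufacturing time. Values here are illustrative only.
GOLDEN_BIOS_DIGEST = hashlib.sha256(b"known-good BIOS image").hexdigest()


def verify_before_boot(firmware_image: bytes, golden_digest: str) -> bool:
    """PFR-style validation sketch: measure the firmware region and compare
    against the provisioned value before the firmware is allowed to execute.
    """
    measured = hashlib.sha256(firmware_image).hexdigest()
    # Constant-time comparison avoids leaking digest bytes via timing.
    return hmac.compare_digest(measured, golden_digest)


assert verify_before_boot(b"known-good BIOS image", GOLDEN_BIOS_DIGEST)
assert not verify_before_boot(b"tampered BIOS image", GOLDEN_BIOS_DIGEST)
```

The important design point is ordering: the check happens before the first instruction of the measured firmware runs, so a corrupted image can be rejected (and, in PFR's case, recovered) rather than trusted.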
Privacy-preserving, trusted platforms in the upcoming 3rd generation Xeon Scalable processors will help drive even greater innovative services, usage models and solutions for organizations looking to activate the full value of their data.
Maggie Jauregui’s introduction to hardware security is a fun story: she figured out how to spark, smoke, and permanently disable GFCIs (Ground Fault Circuit Interrupters – the two-button protections on plugs/sockets that prevent you from accidentally electrocuting yourself with your hair dryer) wirelessly with a walkie-talkie.
“I could also do this across walls with a directional antenna, and this also worked on AFCIs (Arc Fault Circuit Interrupters – part of the circuit breaker box in your garage), which meant you could drive by someone’s home and potentially turn off their lights,” she told Help Net Security.
Jauregui says she’s always been interested in hardware. She started out as an electrical engineering major but switched to computer science halfway through university, and ultimately applied to be an Intel intern in Mexico.
“After attending my first hackathon — where I actually met my husband — I’ve continued to explore my love for all things hardware, firmware, and security to this day, and have been a part of various research teams at Intel ever since,” she added. (She’s currently a member of the corporation’s Platform Armoring and Resilience team.)
What do we talk about when we talk about hardware security?
Computer systems – a category that these days includes everything from phones and laptops to wireless thermostats and other “smart” home appliances – are a combination of many hardware components (a processor, memory, i/o peripherals, etc.) that together with firmware and software are capable of delivering services and enabling the connected data centric world we live in.
Hardware-based security typically refers to the defenses that help protect against vulnerabilities targeting these devices, and its main focus is to ensure that the different hardware components working together are architected, implemented, and configured correctly.
“Hardware can sometimes be considered its own level of security because it often requires physical presence in order to access or modify specific fuses, jumpers, locks, etc,” Jauregui explained. This is why hardware is also used as a root of trust.
Hardware security challenges
But every hardware device has firmware – a tempting attack vector for many hackers. And though the industry has been making advancements in firmware security solutions, many organizations are still challenged by it and don’t know how to adequately protect their systems and data, she says.
She advises IT security specialists to be aware of firmware’s importance as an asset to their organization’s threat model, to make sure that the firmware on company devices is consistently updated, and to set up automated security validation tools that can scan for configuration anomalies within their platform and evaluate security-sensitive bits within their firmware.
“Additionally, Confidential Computing has emerged as a key strategy for helping to secure data in use,” she noted. “It uses hardware memory protections to better isolate sensitive data payloads. This represents a fundamental shift in how computation is done at the hardware level and will change how vendors can structure their application programs.”
Finally, the COVID-19 pandemic has somewhat disrupted the hardware supply chain and has brought to the fore another challenge.
“Because a computing system is typically composed of multiple components from different manufacturers, each with its own level of scrutiny in relation to potential supply chain attacks, it’s challenging to verify the integrity across all stages of its lifecycle,” Jauregui explained.
“This is why it is critical for companies to work together on a validation and attestation solution for hardware and firmware that can be conducted prior to integration into a larger system. If the industry as a whole comes together, we can create more measures to help protect a product through its entire lifecycle.”
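One common building block for this kind of lifecycle attestation is a TPM-PCR-style "extend" operation: each stage's measurement is folded into a running digest, so the final value attests to the entire sequence and any substituted component changes it. The sketch below shows the pattern in the abstract; the stage names are invented for illustration and this is not any specific vendor's attestation scheme.

```python
import hashlib


def extend(log_digest: str, component: bytes) -> str:
    """Fold a new component measurement into the running attestation value.

    new = SHA-256(old_digest || SHA-256(component)), the same shape as a
    TPM PCR extend. The final digest depends on every stage and its order.
    """
    return hashlib.sha256(
        bytes.fromhex(log_digest) + hashlib.sha256(component).digest()
    ).hexdigest()


# Measurements taken at successive supply-chain stages (illustrative).
log = "00" * 32
for stage in (b"board firmware v1.2", b"BMC image", b"OS bootloader"):
    log = extend(log, stage)

# Swapping any one component anywhere in the chain changes the final value.
tampered = "00" * 32
for stage in (b"board firmware v1.2", b"IMPLANTED BMC image", b"OS bootloader"):
    tampered = extend(tampered, stage)

assert log != tampered
```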
Achieving security in low-end systems on chips
The proliferation of Internet of Things devices and embedded systems, and our reliance on them, make the security of these systems extremely important.
As they commonly rely on systems on chips (SoCs) – integrated circuits that consolidate the components of a computer or other electronic system on a single microchip – securing these devices is a different proposition than securing “classic” computer systems, especially if they rely on low-end SoCs.
Jauregui says that there is no single blanket solution approach to implement security of embedded systems, and that while some of the general hardware security recommendations apply, many do not.
“I highly recommend readers check out the book Demystifying Internet of Things Security, written by Intel scientists and Principal Engineers. It’s an in-depth look at the threat model, secure boot, chain of trust, and the SW stack leading up to defense-in-depth for embedded systems. It also examines the different security building blocks available in Intel Architecture (IA) based IoT platforms and breaks down some of the misconceptions of the Internet of Things,” she added.
“This book explores the challenges to secure these devices and provides suggestions to make them more immune to different threats originating from within and outside the network.”
For those security professionals who are interested in specializing in hardware security, she advises being curious about how things work and doing research, following folks doing interesting things on Twitter and asking them things, and watching hardware security conference talks and trying to reproduce the issues.
“Learn by doing. And if you want someone to lead you through it, go take a class! I recommend hardware security classes by Joe FitzPatrick and Joe Grand, as they are brilliant hardware researchers and excellent teachers,” she concluded.
Intel announced enhanced internet of things (IoT) capabilities. The 11th Gen Intel Core processors, Intel Atom x6000E series, and Intel Pentium and Celeron N and J series bring new artificial intelligence (AI), security, functional safety and real-time capabilities to edge customers.
With a robust hardware and software portfolio, an unparalleled ecosystem and 15,000 customer deployments globally, Intel is delivering solutions for an edge silicon market opportunity expected to reach $65 billion by 2024.
“By 2023, up to 70% of all enterprises will process data at the edge. 11th Gen Intel Core processors, Intel Atom x6000E series, and Intel Pentium and Celeron N and J series processors represent our most significant step forward yet in enhancements for IoT, bringing features that address our customers’ current needs, while setting the foundation for capabilities with advancements in AI and 5G,” said John Healy, Intel vice president of the Internet of Things Group and general manager of Platform Management and Customer Engineering.
Why it’s important
Intel works closely with customers to build proofs of concept, optimize solutions and collect feedback along the way. Innovations delivered with 11th Gen Intel Core processors, Intel Atom x6000E series, and Intel Pentium and Celeron N and J series processors are a response to challenges felt across the IoT industry: edge complexity, total cost of ownership and a range of environmental conditions.
Combining a common and seamless developer experience with software and tools like the Edge Software Hub’s Edge Insights for Industrial and the Intel Distribution of OpenVINO toolkit, Intel helps customers and developers get to market faster and deliver more powerful outcomes with optimized, containerized packages to enable sensing, vision, automation and other transformative edge applications.
For example, when combined with 11th Gen’s SuperFin process improvements and other enhancements, OpenVINO running on an 11th Gen Core i5 delivers amazing AI performance: up to 2 times faster inferences per second than a prior 8th Gen Core i5-8500 processor when running on just the CPU in each product.
About 11th Gen Core processors
Building on the recently announced client processors, 11th Gen Core is enhanced specifically for essential IoT applications that require high-speed processing, computer vision and low-latency deterministic computing.
It delivers up to a 23% gain in single-thread performance, a 19% gain in multithread performance and up to a 2.95x gain in graphics performance, generation over generation. New dual-video decode boxes allow the processor to ingest up to 40 simultaneous video streams at 1080p 30 frames per second and output up to four channels of 4K or two channels of 8K video.
AI-inferencing algorithms can run on up to 96 graphic execution units (INT8) or run on the CPU with vector neural network instructions (VNNI) built in.
With Intel Time Coordinated Computing (Intel TCC Technology) and time-sensitive networking (TSN) technologies, 11th Gen processors enable real-time computing demands while delivering deterministic performance across a variety of use cases:
- Industrial sector: Mission-critical control systems (PLC, robotics, etc.), industrial PCs and human-machine interfaces.
- Retail, banking and hospitality: Intelligent, immersive digital signage, interactive kiosks and automated checkout.
- Healthcare: Next-generation medical imaging devices with high-resolution displays and AI-powered diagnostics.
- Smart city: Smart network video recorders with onboard AI inferencing and analytics.
Intel’s 11th Gen Core processors already have over 90 partners committed to delivering solutions to meet customers’ demands.
About Intel Atom x6000E Series and Intel Pentium and Celeron N and J series processors
These represent Intel’s first processor platform enhanced for IoT. They deliver enhanced real-time performance and efficiency; up to 2 times better 3D graphics; a dedicated real-time offload engine; Intel Programmable Services Engine, which supports out-of-band and in-band remote device management; enhanced I/O and storage options; and integrated 2.5GbE time-sensitive networking.
They can support 4Kp60 resolution on up to three simultaneous displays, meet strict functional safety requirements with the Intel Safety Island and include built-in hardware-based security. These processors have a variety of use cases, including:
- Industrial: Real-time control systems and devices that meet functional safety requirements for industrial robots and for chemical, oil field and energy grid-control applications.
- Transportation: Vehicle controls, fleet monitoring and management systems that synchronize inputs from multiple sensors and direct actions in semiautonomous buses, trains, ships and trucks.
- Healthcare: Medical displays, carts, service robots, entry-level ultrasound machines, gateways and kiosks that require AI and computer vision with reduced energy consumption.
- Retail and hospitality: Fixed and mobile point-of-sale systems with high-resolution graphics for retail and quick-service restaurants.
On this September 2020 Patch Tuesday:
- Microsoft has plugged 129 security holes, including a critical RCE flaw that could be triggered by sending a specially crafted email to an affected Exchange Server installation
- Adobe has delivered security updates for Adobe Experience Manager, AEM Forms, Framemaker and InDesign
- Intel has released four security advisories
- SAP has released 10 security notes and updates to six previously released notes
Microsoft has released patches for 129 CVEs, 23 of which are “critical”, 105 “important”, and one “medium”-risk (a security feature bypass flaw in SQL Server Reporting Services). None of them are publicly known or being actively exploited.
Trend Micro Zero Day Initiative’s Dustin Childs says that patching CVE-2020-16875, a memory corruption vulnerability in Microsoft Exchange, should be top priority for organizations using the popular mail server.
“This patch corrects a vulnerability that allows an attacker to execute code at SYSTEM by sending a specially crafted email to an affected Exchange Server. That doesn’t quite make it wormable, but it’s about the worst-case scenario for Exchange servers,” he explained. “We have seen the previously patched Exchange bug CVE-2020-0688 used in the wild, and that requires authentication. We’ll likely see this one in the wild soon.”
Another interesting patch released this month is that for CVE-2020-0951, a security feature bypass flaw in Windows Defender Application Control (WDAC). Patches are available for Windows 10 and Windows Server 2016 and above.
“This patch is interesting for reasons beyond just the bug being fixed. An attacker with administrative privileges on a local machine could connect to a PowerShell session and send commands to execute arbitrary code. This behavior should be blocked by WDAC, which does make this an interesting bypass. However, what’s really interesting is that this is getting patched at all,” Childs explained.
“Vulnerabilities that require administrative access to exploit typically do not get patches. I’m curious about what makes this one different.”
Many of the critical and important flaws fixed this time affect various editions of Microsoft SharePoint (Server, Enterprise, Foundation). Some require authentication, but many do not, so if you don’t want to fall prey to exploits hidden in specially crafted web requests, pages or SharePoint application packages, see that you install the required updates soon.
Satnam Narang, staff research engineer at Tenable, pointed out that one of them – CVE-2020-1210 – is reminiscent of a similar SharePoint remote code execution flaw, CVE-2019-0604, that has been exploited in the wild by threat actors since at least April 2019.
CVE-2020-0922, an RCE in Microsoft COM (Component Object Model), should also be patched quickly on all Windows and Windows Server systems.
He also advised organizations in the financial industry who use Microsoft Dynamics 365 for Finance and Operations (on-premises) and Microsoft Dynamics 365 (on-premises) to quickly patch CVE-2020-16857 and CVE-2020-16862.
“Impacting the on-premise servers with this finance and operations focused service installed, both exploits require a specifically created file to exploit the security vulnerability, allowing the attacker to gain remote code execution capability. More concerning with these vulnerabilities is that both flaws, if exploited, would allow an attacker to steal documents and data deemed critical. Due to the nature and use of Microsoft Dynamics in the financial industry, a theft like this could spell trouble for any company of any size,” he added.
Jimmy Graham, Sr. Director of Product Management, Qualys, says that Windows Codecs, GDI+, Browser, COM, and Text Service Module vulnerabilities should be prioritized for workstation-type devices.
Adobe has released security updates for Adobe Experience Manager (AEM) – a web-based client-server system for building, managing and deploying commercial websites and related services – and the AEM Forms add-on package for all platforms, Adobe Framemaker for Windows and Adobe InDesign for macOS.
The AEM and AEM Forms updates are more important than the rest.
The Adobe Framemaker update fixes two critical flaws that could lead to code execution, and the Adobe InDesign update five of them, but as vulnerabilities in these two offerings are not often targeted by attackers, admins are advised to implement them after more critical updates are secured.
None of the fixed vulnerabilities are being currently exploited in the wild.
Intel took advantage of the September 2020 Patch Tuesday to release four advisories, accompanying fixes for the Intel Driver & Support Assistant, BIOS firmware for multiple Intel Platforms, and Intel Active Management Technology (AMT) and Intel Standard Manageability (ISM).
The latter fixes are the most important, as they fix a privilege escalation flaw that has been deemed to be “critical” for provisioned systems.
SAP marked the September 2020 Patch Tuesday by releasing 10 security notes and updates to six previously released ones (for SAP Solution Manager, SAP NetWeaver, SAPUI5 and SAP NetWeaver AS JAVA).
Patches have been provided for newly fixed flaws in a variety of offerings, including SAP Marketing, SAP NetWeaver, SAP Bank Analyzer, SAP S/4HANA Financial Products, SAP Business Objects Business Intelligence Platform, and others.
August 2020 Patch Tuesday was expectedly observed by Microsoft and Adobe, but many other software firms decided to push out security updates as well: Apple released iCloud for Windows updates, Google pushed out fixes for Chrome, and they were followed by Intel, SAP and Citrix. It’s not unusual for Intel to take advantage of a Patch Tuesday; this time the company released 18 advisories, addressing flaws that include DoS, information disclosure and EoP issues.
As communications service providers (CoSPs) evolve their networks to support the rollout of future 5G networks, they are increasingly adopting a software-defined, virtualized infrastructure. Virtualization of the core network has already enabled CoSPs to improve operational costs and bring services to market faster. This expanded collaboration between Intel and VMware aims to offer CoSPs reduced development cycles and scale across multiple designs.
Many CoSPs are embracing the idea of having open and disaggregated RAN architectures that can give them added flexibility and choice, as well as programmability to create and deploy new services that require fine grained radio resource control and dynamic slicing to provide differentiated experiences such as cloud gaming and cloud controlled robotics. This collaboration seeks to simplify the steps and reduce the integration effort involved in creating deployable virtualized RAN solutions.
Intel and VMware will work with a rich ecosystem, including telecom equipment manufacturers, original equipment manufacturers and RAN software vendors, to help CoSPs more easily build on top of the vRAN platform to address specific use cases. As part of this effort, Intel and VMware will collaborate in building programmable open interfaces that leverage Intel’s FlexRAN software reference architecture and a VMware RAN Intelligent Controller (RIC), to enable development of innovative radio network functions using AI/ML learning for real time resource management, traffic steering and dynamic slicing. This in turn will assist in optimized QoE for rollout of new 5G vertical use cases.
“Many CoSPs are choosing to extend the benefits of network virtualization into the RAN for increased agility as they roll out new 5G services, but the software integration can be rather complex. With an integrated vRAN platform, combined with leading technology and expertise from Intel and VMware, CoSPs are positioned to benefit from accelerated time to deployment of innovative services at the edge of their network,” explained Dan Rodriguez, corporate vice president and general manager, Network Platforms Group, Intel.
“CoSPs around the globe rely on VMware’s Telco Cloud platform to deploy and manage myriad core network functions. As they look to extend their software-defined infrastructure out to the RAN, there are tremendous benefits to delivering all network functions on a single platform,” said Shekar Ayyar, executive vice president and general manager, Telco and Edge Cloud, VMware. “With an integrated platform, CoSPs will be able to deploy new network functions across the same Telco Cloud architecture, from core to RAN, enabling the scale and agility needed to deliver services across a 5G network more efficiently.”
Intel unveils 3rd Gen Intel Xeon Scalable processors, additions to its hardware and software AI portfolio
Intel introduced its 3rd Gen Intel Xeon Scalable processors and additions to its hardware and software AI portfolio, enabling customers to accelerate the development and use of artificial intelligence (AI) and analytics workloads running in data center, network and intelligent-edge environments.
The industry’s first mainstream server processors with built-in bfloat16 support, Intel’s new 3rd Gen Xeon Scalable processors make AI inference and training more widely deployable on general-purpose CPUs for applications that include image classification, recommendation engines, speech recognition and language modeling.
“The ability to rapidly deploy AI and data analytics is essential for today’s businesses. We remain committed to enhancing built-in AI acceleration and software optimizations within the processor that powers the world’s data center and edge solutions, as well as delivering an unmatched silicon foundation to unleash insight from data.” – Lisa Spelman, Intel corporate vice president and general manager, Xeon and Memory Group
AI and analytics open new opportunities for customers across a broad range of industries, including finance, healthcare, industrial, telecom and transportation.
IDC predicts that by 2021, 75% of commercial enterprise apps will use AI. And by 2025, IDC estimates that roughly a quarter of all data generated will be created in real time, with various internet of things (IoT) devices creating 95% of that volume growth.
Intel’s new data platforms, coupled with a thriving ecosystem of partners using Intel AI technologies, are optimized for businesses to monetize their data through the deployment of intelligent AI and analytics services.
New 3rd gen Intel Xeon Scalable Processors
Intel is further extending its investment in built-in AI acceleration in the new 3rd Gen Intel Xeon Scalable processors through the integration of bfloat16 support into the processor’s unique Intel DL Boost technology.
Bfloat16 is a compact numeric format that uses half the bits as today’s FP32 format but achieves comparable model accuracy with minimal — if any — software changes required. The addition of bfloat16 support accelerates both AI training and inference performance in the CPU.
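The "half the bits" relationship is direct: bfloat16 keeps FP32's sign bit and all 8 exponent bits but only the top 7 mantissa bits, so conversion is a 16-bit truncation that preserves dynamic range while reducing precision. The minimal sketch below demonstrates that round trip; helper names are ours, not part of any Intel API.

```python
import struct


def to_bfloat16(x: float) -> bytes:
    """Truncate an FP32 value to bfloat16 by keeping its top 16 bits.

    bfloat16 retains FP32's sign and 8-bit exponent (so the same dynamic
    range) but only 7 mantissa bits (so less precision).
    """
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return (bits >> 16).to_bytes(2, "big")


def from_bfloat16(b: bytes) -> float:
    """Widen a bfloat16 value back to FP32 by zero-filling the low 16 bits."""
    bits = int.from_bytes(b, "big") << 16
    return struct.unpack(">f", struct.pack(">I", bits))[0]


# Values with short mantissas survive exactly; others lose only
# low-order precision (pi round-trips to 3.140625).
assert from_bfloat16(to_bfloat16(1.0)) == 1.0
assert abs(from_bfloat16(to_bfloat16(3.14159265)) - 3.14159265) < 0.02
```

This is why "minimal — if any — software changes" are needed: model weights stay in the same exponent range as FP32, and only the low-order mantissa bits are dropped.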
Intel-optimized distributions for leading deep learning frameworks (including TensorFlow and PyTorch) support bfloat16 and are available through the Intel AI Analytics toolkit. Intel also delivers bfloat16 optimizations into its OpenVINO toolkit and the ONNX Runtime environment to ease inference deployments.
The 3rd Gen Intel Xeon Scalable processors (code-named “Cooper Lake”) evolve Intel’s 4- and 8-socket processor offering. The processor is designed for deep learning, virtual machine (VM) density, in-memory database, mission-critical applications and analytics-intensive workloads.
Customers refreshing aging infrastructure can expect an average estimated gain of 1.9 times on popular workloads and up to 2.2 times more VMs compared with 5-year-old 4-socket platform equivalents.
New Intel Optane persistent memory
As part of the 3rd Gen Intel Xeon Scalable platform, the company also announced the Intel Optane persistent memory 200 series, providing customers up to 4.5TB of memory per socket to manage data-intensive workloads, such as in-memory databases, dense virtualization, analytics and high-powered computing.
New Intel 3D NAND SSDs
For systems that store data in all-flash arrays, Intel announced the availability of its next-generation high-capacity Intel 3D NAND SSDs, the Intel SSD D7-P5500 and P5600.
These 3D NAND SSDs are built with Intel’s latest triple-level cell (TLC) 3D NAND technology and an all-new low-latency PCIe controller to meet the intense I/O requirements of AI and analytics workloads, and they include advanced features to improve IT efficiency and data security.
First Intel AI-optimized FPGA
Intel disclosed its upcoming Intel Stratix 10 NX FPGAs, Intel’s first AI-optimized FPGAs targeted for high-bandwidth, low-latency AI acceleration. These FPGAs will offer customers customizable, reconfigurable and scalable AI acceleration for compute-demanding applications such as natural language processing and fraud detection.
Intel Stratix 10 NX FPGAs include integrated high-bandwidth memory (HBM), high-performance networking capabilities and new AI-optimized arithmetic blocks called AI Tensor Blocks, which contain dense arrays of lower-precision multipliers typically used for AI model arithmetic.
OneAPI cross-architecture development for ongoing AI innovation
As Intel expands its advanced AI product portfolio to meet diverse customer needs, it is also paving the way to simplify heterogeneous programming for developers with its oneAPI cross-architecture tools portfolio to accelerate performance and increase productivity.
With these advanced tools, developers can accelerate AI workloads across Intel CPUs, GPUs and FPGAs, and future-proof their code for today’s and the next generations of Intel processors and accelerators.
Enhanced Intel Select Solutions portfolio addresses IT’s top requirements
Intel has enhanced its Select Solutions portfolio to accelerate deployment of IT’s most urgent requirements, highlighting the value of pre-verified solution delivery in today’s rapidly evolving business climate. Three new and five enhanced Intel Select Solutions, focused on analytics, AI and hyper-converged infrastructure, were announced.
The enhanced Intel Select Solution for Genomics Analytics is being used around the world to find a vaccine for COVID-19 and the new Intel Select Solution for VMware Horizon VDI on vSAN is being used to enhance remote learning.
When products are available
The 3rd Gen Intel Xeon Scalable processors and Intel Optane persistent memory 200 series are shipping to customers today. In May, Facebook announced that 3rd Gen Intel Xeon Scalable processors are the foundation for its newest Open Compute Platform (OCP) servers, and other leading CSPs, including Alibaba, Baidu and Tencent, have announced they are adopting the next-generation processors.
General OEM systems availability is expected in the second half of 2020. The Intel SSD D7-P5500 and P5600 3D NAND SSDs are available now. And the Intel Stratix 10 NX FPGA is expected to be available in the second half of 2020.
19 vulnerabilities – some of them allowing remote code execution – have been discovered in a TCP/IP stack/library used in hundreds of millions of IoT and OT devices deployed by organizations in a wide variety of industries and sectors.
“Affected vendors range from one-person boutique shops to Fortune 500 multinational corporations, including HP, Schneider Electric, Intel, Rockwell Automation, Caterpillar, Baxter, as well as many other major international vendors,” say the researchers who discovered the flaws.
About the vulnerable TCP/IP software library
The vulnerable library was developed by US-based Treck and a Japanese company named Elmic Systems (now Zuken Elmic) in the 1990s. At one point in time, the two companies parted ways and each continued developing a separate branch of the stack/library.
The one developed by Treck – Treck TCP/IP – is marketed in the U.S. and the other one, dubbed Kasago TCP/IP, is marketed by Zuken Elmic in Asia.
The library’s high reliability, performance, and configurability are what made it so popular and widely deployed.
“The [Treck TCP/IP] library could be used as-is, configured for a wide range of uses, or incorporated into a larger library. The user could buy the library in source code format and edit it extensively. It can be incorporated into the code and implanted into a wide range of device types,” the researchers explained.
“The original purchaser could decide to rebrand, or could be acquired by a different corporation, with the original library history lost in company archives. Over time, the original library component could become virtually unrecognizable. This is why, long after the original vulnerability was identified and patched, vulnerabilities may still remain in the field, since tracing the supply chain trail may be practically impossible.”
The vulnerabilities were discovered by Moshe Kol and Shlomi Oberman from JSOF in the Treck TCP/IP library, and Zuken Elmic confirmed that some of them affect the Kasago library.
About the vulnerabilities
Collectively dubbed Ripple20, the vulnerabilities (numbered CVE-2020-11896 through CVE-2020-11914) range from critical to low-risk. Four enable remote code execution. Others could be used to achieve sensitive information disclosure, (persistent) denial of service, and more.
“One of the critical vulnerabilities is in the DNS protocol and may potentially be exploitable by a sophisticated attacker over the internet, from outside the network boundaries, even on devices that are not connected to the internet,” the researchers noted.
“Most of the vulnerabilities are true zero-days, with 4 of them having been closed over the years as part of routine code changes, but remained open in some of the affected devices (3 lower severity, 1 higher). Many of the vulnerabilities have several variants due to the stack configurability and code changes over the years.”
The researchers plan to release technical reports on some of them and are scheduled to demonstrate exploitation of the DNS vulnerability on a Schneider Electric APC UPS device at Black Hat USA in August.
The Treck TCP/IP library did not receive much attention from security researchers in the past. After JSOF researchers decided to probe it and discovered the flaws, they also discovered that contacting the many, many vendors who implement it was going to be a time-consuming task.
Treck was made aware of the vulnerabilities and fixed them, but insisted on contacting clients and users of the code library itself and providing the appropriate patches directly.
But, since some of the vulnerabilities also affect the Kasago library, JSOF involved multiple national computer emergency response team (CERT) organizations and regulators in the disclosure process.
“CERT groups focus on ways to identify and mitigate security risks. For example, they can reach a much larger target group of potential users with blast announcements, ‘mass-mailings’ that they broadcast to a long list of participating companies to notify them of the potential vulnerability. Once users are identified, mitigation comes into play,” the researchers explained.
“While the best response might be to install the original Treck patch, there are many situations in which installing the original patch is not possible. CERTs work to develop alternative approaches that can be used to minimize or effectively eliminate the risk, even if patching is not an option.”
The Ripple20 vulnerabilities were so named because of the extent of their impact.
“The wide-spread dissemination of the software library (and its internal vulnerabilities) was a natural consequence of the supply chain ‘ripple-effect’. A single vulnerable component, though it may be relatively small in and of itself, can ripple outward to impact a wide range of industries, applications, companies, and people,” they noted.
“The inclusion of the number ’20’ denotes our disclosure process beginning in 2020, while additionally symbolizing and giving deference to our belief in the potential for additional vulnerabilities to be found from the original 19,” they told Help Net Security.
The researchers have pointed out that the vulnerability disclosure process, their own efforts to identify users of the Treck library, and the patch/mitigation dissemination process have been immensely aided by Treck, various CERTs, CISA, and several security vendors (Forescout, CyberMDX).
A number of vendors have confirmed that their offerings are affected by the Ripple20 flaws. JSOF has compiled a list of affected and unaffected vendors, which will be continually updated as additional information becomes available.
Device vendors should update the Treck library to a fixed version (6.0.1.66 or higher), while organizations should check their network for affected devices and contact the vendors for more information on how to mitigate the exploitation risk. The researchers will make available, upon request, a script to help companies identify Treck products on their networks.
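Until such a script is in hand, a first triage pass can be as simple as matching an asset inventory against the vendors the researchers have publicly named. A minimal sketch (the inventory records below are made up for illustration):

```python
# First-pass Ripple20 triage: flag inventory entries from vendors the JSOF
# researchers named as affected. Vendor names come from their public
# statements; the inventory records are illustrative only.

AFFECTED_VENDORS = {
    "hp", "schneider electric", "intel",
    "rockwell automation", "caterpillar", "baxter",
}  # partial list, per JSOF

inventory = [
    {"host": "ups-lab-01", "vendor": "Schneider Electric"},
    {"host": "printer-2f", "vendor": "HP"},
    {"host": "nas-07",     "vendor": "ExampleCo"},
]

# Case-insensitive match against the affected-vendor list.
suspect = [d for d in inventory if d["vendor"].lower() in AFFECTED_VENDORS]
for d in suspect:
    print(f"{d['host']}: contact {d['vendor']} for Ripple20 guidance")
```

A vendor match is of course only a starting point; whether a given device actually embeds the Treck stack must still be confirmed with the vendor or via network fingerprinting.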
“Fixing these vulnerabilities presents its own set of challenges, even once they’ve been identified on the network. Some already have patches available. But there are also complicating factors,” Forescout CEO and President Michael DeCesare noted.
“With these types of supply chain vulnerabilities and embedded components, the vendor that is creating the patch isn’t necessarily the one that will release it. That can delay the issuance of a patch. There are also no guarantees that the device vendor is still in business, or that they still support the device. The complex nature of the supply chain may also mean the device is not patchable at all, even if it needs to remain on the network. In such cases, mitigating controls such as segmentation will be needed to limit its risk.”
The various CERTs and agencies like CISA will surely offer mitigation advice via security advisories.
The history of hacking has largely been a back-and-forth game, with attackers devising a technique to breach a system, defenders constructing a countermeasure that prevents the technique, and hackers devising a new way to bypass system security. On Monday, Intel is announcing its plans to bake a new parry directly into its CPUs that’s designed to thwart software exploits that execute malicious code on vulnerable computers.
Control-Flow Enforcement Technology, or CET, represents a fundamental change in the way processors execute instructions from applications such as Web browsers, email clients, or PDF readers. Jointly developed by Intel and Microsoft, CET is designed to thwart a technique known as return-oriented programming, which hackers use to bypass anti-exploit measures software developers introduced about a decade ago. While Intel first published its implementation of CET in 2016, the company on Monday is saying that its Tiger Lake CPU microarchitecture will be the first to include it.
ROP, as return-oriented programming is usually called, was software exploiters’ response to protections such as Executable Space Protection and address space layout randomization, which made their way into Windows, macOS, and Linux a little less than two decades ago. These defenses were designed to significantly lessen the damage software exploits could inflict by introducing changes to system memory that prevented the execution of malicious code. Even when successfully targeting a buffer overflow or other vulnerability, the exploit resulted only in a system or application crash, rather than a fatal system compromise.
ROP allowed attackers to regain the high ground. Rather than using malicious code written by the attacker, ROP attacks repurpose functions that benign applications or OS routines have already placed into a region of memory known as the stack. The “return” in ROP refers to use of the RET instruction that’s central to reordering the code flow.
Alex Ionescu, a veteran Windows security expert and VP of engineering at security firm CrowdStrike, likes to say that if a benign program is like a building made of Lego bricks that were built in a specific sequence, ROP uses the same Lego pieces but in a different order. In so doing, ROP converts the building into a spaceship. The technique is able to bypass the anti-malware defenses because it uses memory-resident code that’s already permitted to be executed.
CET introduces changes in the CPU that create a new stack called the control stack. This stack can’t be modified by attackers and doesn’t store any data. It stores the return addresses of the Lego bricks that are already in the stack. Because of this, even if an attacker has corrupted a return address in the data stack, the control stack retains the correct return address. The processor can detect this and halt execution.
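The shadow-stack check can be illustrated with a toy model. This is a sketch only: real CET maintains the control stack in hardware, invisible to ordinary software, and operates on actual return addresses rather than Python objects.

```python
# Toy model of a CET-style shadow stack (illustration only; real CET
# enforces this in hardware, not in application code).

class ShadowStackViolation(Exception):
    """Raised when the data stack and control stack disagree on return."""

class ToyCPU:
    def __init__(self):
        self.data_stack = []     # writable, e.g. via a buffer overflow
        self.control_stack = []  # protected copy of return addresses only

    def call(self, return_addr):
        # CALL pushes the return address onto both stacks.
        self.data_stack.append(return_addr)
        self.control_stack.append(return_addr)

    def ret(self):
        # RET pops both stacks; a mismatch means the data stack was corrupted.
        addr = self.data_stack.pop()
        expected = self.control_stack.pop()
        if addr != expected:
            raise ShadowStackViolation(
                f"return to {addr:#x}, expected {expected:#x}")
        return addr

cpu = ToyCPU()
cpu.call(0x401000)            # benign call
cpu.data_stack[-1] = 0x666    # simulated ROP: overwrite the return address
try:
    cpu.ret()
except ShadowStackViolation as e:
    print("blocked:", e)
```

On real hardware the mismatch raises a control-protection fault, which the operating system can use to terminate the offending process.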
“Because there is no effective software mitigation against ROP, CET will be very effective at detecting and stopping this class of vulnerability,” Ionescu told me. “Previously, operating systems and security solutions had to guess or infer that ROP had happened, or perform forensic analysis, or detect the second stage payloads/effect of the exploit.”
Not that CET is limited to defenses against ROP. CET provides a host of additional protections, some of which thwart exploitation techniques known as jump-oriented programming and call-oriented programming, to name just two. ROP, however, is among the most interesting aspects of CET.
Those who do not remember the past
Intel has built other security functions into its CPUs with less-than-stellar results. One is Intel’s SGX, short for Software Guard eXtension, which is supposed to carve out impenetrable chunks of protected memory for security-sensitive functions such as the creation of cryptographic keys. Another security add-on from Intel is known as the Converged Security and Management Engine, or simply the Management Engine. It’s a subsystem inside Intel CPUs and chipsets that implements a host of sensitive functions, among them the firmware-based Trusted Platform Module used for silicon-based encryption, authentication of UEFI BIOS firmware, and Microsoft’s System Guard and BitLocker.
A steady stream of security flaws discovered in both CPU-resident features, however, has made them vulnerable to a variety of attacks over the years. The most recent SGX vulnerabilities were disclosed just last week.
It’s tempting to think that CET will be similarly easy to defeat, or worse, will expose users to hacks that wouldn’t be possible if the protection hadn’t been added. But Joseph Fitzpatrick, a hardware hacker and a researcher at SecuringHardware.com, says he’s optimistic CET will perform better. He explained:
One distinct difference that makes me less skeptical of this type of feature versus something like SGX or ME is that both of those are “adding on” security features, as opposed to hardening existing features. ME basically added a management layer outside the operating system. SGX adds operating modes that theoretically shouldn’t be able to be manipulated by a malicious or compromised operating system. CET merely adds mechanisms to prevent normal operation—returning to addresses off the stack and jumping in and out of the wrong places in code—from completing successfully. Failure of CET to do its job only allows normal operation. It doesn’t grant the attacker access to more capabilities.
Once CET-capable CPUs are available, the protection will work only when the processor is running an operating system with the necessary support. Windows 10 Version 2004 released last month provides that support. Intel still isn’t saying when Tiger Lake CPUs will be released. While the protection could give defenders an important new tool, Ionescu and fellow researcher Yarden Shafir have already devised bypasses for it. Expect them to end up in real-world attacks within the decade.
Built into virtually every hardware device, firmware is lower-level software that is programmed to ensure that hardware functions properly.
As software security has been significantly hardened over the past two decades, hackers have responded by moving down the stack to focus on firmware entry points. Firmware offers a target that basic security controls can’t access or scan as easily as software, while allowing attackers to persist and continue leveraging many of their tried-and-true attack techniques.
The industry has reacted to this shift in attackers’ focus by making advancements in firmware security solutions and best practices over the past decade. That said, many organizations are still suffering from firmware security blind spots that prevent them from adequately protecting systems and data.
This can be caused by a variety of factors, from simple platform misconfigurations or reluctance to install new updates to a general lack of awareness of the imperative need for firmware security.
In short, many don’t know what firmware security hazards exist today. To help readers stay more informed, here are three firmware security blind spots every organization should consider addressing to improve its overall security stance:
1. Firmware security awareness
The security of the firmware running on the devices we use every day has recently become a focus point for researchers across the security community. With multiple components running a variety of different firmware, it can be overwhelming to know where to start. A good first step is recognizing firmware as an asset in your organization’s threat model and establishing security objectives for confidentiality, integrity, and availability (CIA). Here are some examples of how CIA applies to firmware security:
- Confidentiality: There may be secrets in firmware that require protection. The BIOS password, for instance, might grant attackers authentication bypass if they were able to access firmware contents.
- Integrity: This means ensuring the firmware running on a system is the firmware intended to be running and hasn’t been corrupted or modified. Features such as secure boot and hardware roots of trust support the measurement and verification of the firmware you’re running.
- Availability: In most cases, ensuring devices have access to their firmware in order to operate normally is the top priority for an organization as far as firmware is concerned. A potential breach of this security objective would come in the form of a permanent denial of service (PDoS) attack, which would require manual re-flashing of system components (a sometimes costly and cumbersome solution).
The first step toward firmware security is awareness of its importance as an asset to an organization’s threat model, along with the definition of CIA objectives.
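The integrity objective can be illustrated in miniature: compare a firmware image’s cryptographic measurement against a known-good value, loosely mirroring what a hardware root of trust does at boot. This is a simplified sketch; real verified boot chains rely on signed measurements and protected storage, not a bare digest comparison.

```python
# Minimal sketch of firmware integrity measurement: hash the image and
# compare against a known-good ("golden") measurement. Real verified boot
# uses signed measurements anchored in a hardware root of trust.
import hashlib
import hmac

def measure(image: bytes) -> str:
    """SHA-256 measurement of a firmware image, as a hex digest."""
    return hashlib.sha256(image).hexdigest()

def verify(image: bytes, golden_digest: str) -> bool:
    """Compare against the known-good measurement in constant time."""
    return hmac.compare_digest(measure(image), golden_digest)

golden_image = b"\x0f" * 1024              # stand-in for the vendor image
golden = measure(golden_image)             # measurement recorded at release

tampered = golden_image[:-1] + b"\x00"     # a single modified byte
print("untouched image passes:", verify(golden_image, golden))
print("tampered image passes: ", verify(tampered, golden))
```

Even a one-byte modification changes the measurement entirely, which is what makes hash-based measurement a workable foundation for the integrity objective.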
2. Firmware updates
The increase in low-level security research has led to an equivalent increase in findings and fixes provided by vendors, contributing to the gradual improvement of platform resilience. Vendors often work with researchers through their bug bounty programs, their in-house research teams, and with researchers presenting their work at conferences around the world, in order to conduct coordinated disclosure of firmware security vulnerabilities. The industry has come a long way in enabling collaboration, establishing processes and accelerating response times toward a common goal: improving the overall health and resilience of computer systems.
The firmware update process can be complex and time consuming, and involves a variety of parties: researchers, device manufacturers, OEMs, etc. For example, once UEFI’s EDK II source code has been updated with a new fix, vendors must adopt it and push the changes out to end customers. Vendors issue firmware updates for a variety of reasons, but some of the most important patches are designed explicitly to address newly discovered security vulnerabilities.
Regular firmware updates are vital to a strong security posture, but many organizations are hesitant to introduce new patches due to a range of factors. Whether it’s concerns over the potential time or cost involved, or fear of platform bricking potential, there are a variety of reasons why updates are left uninstalled. Delaying or forgoing available fixes, however, increases the amount of time your organization may be at risk.
A good example of this is WannaCry. Although Microsoft had previously released updates to address the exploit, the WannaCry ransomware wreaked havoc on hundreds of thousands of unpatched computers throughout the spring of 2017, affecting organizations in more than 150 countries and causing billions of dollars in damages. While this outbreak wasn’t the result of a firmware vulnerability specifically, it offers a stark illustration of what can happen when organizations choose not to apply patches for known threats.
Installing firmware updates regularly is arguably one of the simplest and most powerful steps you can take toward better security today. Without them, your organization remains at greater risk of sustaining a security incident, unaware of fixes for known vulnerabilities.
If you’re concerned that installing firmware updates might inadvertently break your organization’s systems, consider conducting field tests on a small batch of systems before rolling them out company-wide and remember to always have a backup of the current image of your platform to revert back to as a precautionary measure. Be sure to establish a firmware update cadence that works for your organization in order to keep your systems up to date with current firmware protections at minimal risk.
3. Platform misconfigurations
Another issue that can cause firmware security risks is platform misconfigurations. Once powered on, a platform follows a complex set of steps to properly configure the computer for runtime operations. There are many time- and sequence-based elements and expectations for how firmware and hardware interact during this process, and security assumptions can be broken if the platform isn’t set up properly.
Disabled security features such as secure boot, VT-d, port protections (like Thunderbolt), execution prevention, and more are examples of potentially costly platform misconfigurations. All sorts of firmware security risks can arise if an engineer forgets a key configuration step or fails to properly configure one of the hundreds of bits involved.
Most platform misconfigurations are difficult to detect without automated security validation tools: different generations of platforms may define registers differently, there is a long list of things to check for, and there may be dependencies between the settings. It can quickly become cumbersome to keep track of proper platform configurations in a cumulative way.
Fortunately, tools like the Intel-led, open-source Chipsec project can scan for configuration anomalies within your platform and evaluate security-sensitive bits within your firmware to identify misconfigurations automatically. As a truly cumulative, open-source tool, Chipsec is updated regularly with the most recent threat insights so organizations everywhere can benefit from an ever-growing body of industry research. Chipsec also has the ability to automatically detect the platform being run in order to set register definitions. On top of scanning, it also offers several firmware security tools including fuzzing, manual testing, and forensic analysis.
Although there are a few solutions with the capability to inspect a system’s configuration, running a Chipsec scan is a free and quick way to ensure a particular system’s settings are set to recommended values.
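As a simplified illustration of what such automated checks look like, the sketch below walks a table of required bit values and flags deviations. The register names, bit positions, and sampled values are hypothetical stand-ins, not actual Intel register definitions; real tools such as Chipsec ship vetted per-platform register definitions.

```python
# Sketch of automated platform-configuration checking. Register names,
# bit positions, and the sampled values below are hypothetical stand-ins,
# NOT real Intel register definitions.

EXPECTED = {
    # (register, bit position): required value
    ("FW_PROT_CTRL", 0): 1,   # hypothetical "write-protect enabled" bit
    ("FW_PROT_CTRL", 5): 0,   # hypothetical "debug unlock" bit must be clear
    ("BOOT_POLICY",  2): 1,   # hypothetical "verified boot enforced" bit
}

def check(sampled: dict) -> list:
    """Return (register, bit, got, want) tuples for every misconfiguration."""
    findings = []
    for (reg, bit), want in EXPECTED.items():
        got = (sampled[reg] >> bit) & 1
        if got != want:
            findings.append((reg, bit, got, want))
    return findings

# A sampled platform where the verified-boot bit was left clear:
platform = {"FW_PROT_CTRL": 0b000001, "BOOT_POLICY": 0b000}
for reg, bit, got, want in check(platform):
    print(f"MISCONFIG: {reg} bit {bit} = {got}, expected {want}")
```

The value of centralizing checks like these in one maintained tool is exactly the cumulative effect described above: each newly understood misconfiguration becomes a table entry every user benefits from.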
Your organization runs on numerous hardware devices, each with its own collection of firmware. As attackers continue to set their sights further down the stack in 2020 and beyond, firmware security will be an important focus for every organization. Ensure your organization properly prioritizes defenses for this growing threat vector: install firmware updates regularly, commit to continuously detecting potential platform misconfigurations, and enable available security features and their respective policies in order to harden firmware resiliency toward confidentiality, integrity and availability.
Computer scientists at KU Leuven have once again exposed a security flaw in Intel processors. Jo Van Bulck, Frank Piessens, and their colleagues in Austria, the United States, and Australia gave the manufacturer one year’s time to fix the problem.
Load Value Injection
Plundervolt, Zombieload, Foreshadow: in the past couple of years, Intel has had to issue quite a few patches for vulnerabilities that computer scientists at KU Leuven have helped to expose. “All measures that Intel has taken so far to boost the security of its processors have been necessary, but they were not enough to ward off our new attack,” says Jo Van Bulck from the Department of Computer Science at KU Leuven.
Like the previous attacks, the new technique – dubbed Load Value Injection – targets the ‘vault’ of computer systems with Intel processors: SGX enclaves.
“To a certain extent, this attack picks up where our Foreshadow attack of 2018 left off. A particularly dangerous version of this attack exploited the vulnerability of SGX enclaves, so that the victim’s passwords, medical information, or other sensitive information was leaked to the attacker.
“Load Value Injection uses that same vulnerability, but in the opposite direction: the attacker’s data are smuggled – ‘injected’ – into a software program that the victim is running on their computer. Once that is done, the attacker can take over the entire program and acquire sensitive information, such as the victim’s fingerprints or passwords.”
Giving Intel enough time to fix the problem
The vulnerability was discovered as early as 4 April 2019. Nevertheless, the researchers and Intel agreed to keep it a secret for almost a year. Responsible disclosure embargoes are not unusual in cybersecurity, although they are usually lifted after a shorter period of time.
“We wanted to give Intel enough time to fix the problem. In certain scenarios, the vulnerability we exposed is very dangerous and extremely difficult to deal with because, this time, the problem did not just pertain to the hardware: the solution also had to take software into account. Therefore, hardware updates like the ones issued to resolve the previous flaws were no longer enough. This is why we agreed upon an exceptionally long embargo period with the manufacturer.”
“Intel ended up taking extensive measures that force the developers of SGX enclave software to update their applications. However, Intel has notified them in time. End-users of the software have nothing to worry about: they only need to install the recommended updates.”
“Our findings show, however, that the measures taken by Intel can make SGX enclave software anywhere from 2 to 19 times slower.”
What are SGX enclaves?
Computer systems are made up of different layers, making them very complex. Every layer also contains millions of lines of computer code. As this code is still written manually, the risk of errors is significant.
If such an error occurs, the entire computer system is left vulnerable to attacks. You can compare it to a skyscraper: if one of the floors becomes damaged, the entire building might collapse.
Viruses exploit such errors to gain access to sensitive or personal information on the computer, from holiday pictures and passwords to business secrets.
In order to protect its processors against this kind of intrusion, Intel introduced an innovative technology in 2015: Intel Software Guard eXtensions (Intel SGX). This technology creates isolated environments in the computer’s memory, so-called enclaves, where data and programs can be used securely.
“If you look at a computer system as a skyscraper, the enclaves form a vault”, researcher Jo Van Bulck explains. “Even when the building collapses the vault should still guard its secrets – including passwords or medical data.”
The technology seemed watertight until August 2018, when researchers at KU Leuven discovered a breach. Their attack was dubbed Foreshadow. In 2019, the Plundervolt attack revealed another vulnerability. Intel has released updates to resolve both flaws.
Radisys delivers its Engage AI-based media apps on OpenNESS to accelerate 4G and 5G networks innovation
Radisys, a global leader of open telecom solutions, announced the deployment of the Radisys Engage portfolio of digital engagement and AI-based real-time media applications on Open Network Edge Services Software (OpenNESS), an open source multi-access edge compute (MEC) platform initiative led by Intel to accelerate innovation and unique experiences on 4G/LTE and 5G networks.
The advent of 5G and massive IoT applications requires ultra-low latency, high bandwidth, and real-time access to radio network resources, leading to the rise of multi-access edge computing, which enables virtualized applications to be deployed on compute resources closest to the edge.
However, the lack of broad industry standardization for MEC has led to fragmentation in the development and deployment of MEC platforms, thereby hindering wide-scale adoption.
The OpenNESS platform abstracts complex networking technology and provides microservices/APIs resulting in an easy-to-use toolkit to develop and deploy applications at the network edge.
Radisys’ Engage advanced real-time media applications are available on the OpenNESS platform, enabling new digital experiences.
- Radisys’ programmable computer vision and analytics applications require ultra-low latency and high-bandwidth consumption for software processing of live video streams to “see” in real-time, enabling enhanced security, IoT, remote monitoring, immersive communication applications and more.
- Radisys’ AR, VR, 360 video and speech recognition capabilities with real-time media analytics applications on a distributed edge compute infrastructure are enabling a new generation of 4G and 5G monetizable services.
- Radisys’ in-network biometric authentication enhances secure access to applications and remote locations.
“We are pleased to deliver a complete and open MEC platform that comes with ready to deploy edge media applications,” said Adnan Saleem, CTO, Software and Cloud Solutions, Radisys.
“Through collaboration with Intel, our solution will help service providers to realize the ultra-low latency benefits of 5G, while enabling rich new applications like augmented reality, localized collaboration, improved security, and more.”
“Intel is collaborating with the Network Builders ecosystem to deliver open solutions that enable service providers to accelerate innovation while controlling complexity and costs,” said Renu Navale, Vice President & General Manager, Edge Computing and Ecosystem Enabling at Intel.
“By adopting and integrating OpenNESS – Intel’s open source software for network edge, Radisys’ Engage media-centric applications provide the industry with a unique platform for real-time media applications and services.”
We’ve all shared the frustration when it comes to errors – software updates that are intended to make our applications run faster inadvertently end up doing just the opposite. These bugs, known in computer science as performance regressions, are time-consuming to fix, since locating software errors normally requires substantial human intervention.
Schematic illustrating how Muzahid’s deep learning algorithm works. The algorithm is ready for anomaly detection after it is first trained on performance counter data from a bug-free version of a program.
To overcome this obstacle, researchers at Texas A&M University, in collaboration with computer scientists at Intel Labs, have now developed a complete automated way of identifying the source of errors caused by software updates.
The deep learning algorithm
Their algorithm, based on a specialized form of machine learning called deep learning, is not only turnkey, but also quick, finding performance bugs in a matter of a few hours instead of days.
“Updating software can sometimes turn on you when errors creep in and cause slowdowns. This problem is even more exaggerated for companies that use large-scale software systems that are continuously evolving,” said Dr. Abdullah Muzahid, assistant professor in the Department of Computer Science and Engineering.
“We have designed a convenient tool for diagnosing performance regressions that is compatible with a whole range of software and programming languages, expanding its usefulness tremendously.”
How does it work?
To pinpoint the source of errors within software, debuggers often check the status of performance counters within the central processing unit. These counters are hardware registers that track how the program is being executed on the computer’s hardware and in memory, for example.
So, when the software runs, counters keep track of the number of times it accesses certain memory locations, the time it stays there and when it exits, among other things. Hence, when the software’s behavior goes awry, counters are again used for diagnostics.
“Performance counters give an idea of the execution health of the program,” said Muzahid. “So, if some program is not running as it is supposed to, these counters will usually have the telltale sign of anomalous behavior.”
However, newer desktops and servers have hundreds of performance counters, making it virtually impossible to keep track of all of their statuses manually and then look for aberrant patterns that are indicative of a performance error. That is where Muzahid’s machine learning comes in.
By using deep learning, the researchers were able to monitor data coming from a large number of the counters simultaneously by reducing the size of the data, which is similar to compressing a high-resolution image to a fraction of its original size by changing its format. In the lower dimensional data, their algorithm could then look for patterns that deviate from normal.
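A linear stand-in can sketch the idea. The published work uses a deep learning model; here plain PCA with NumPy plays its role: learn a low-dimensional representation of healthy counter readings, then flag readings whose reconstruction error is abnormally large.

```python
# Sketch of counter-based anomaly detection. PCA stands in for the deep
# model described in the article; the "counter" data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Training data: 200 counter vectors (50 counters each) from a healthy run.
# The counters are correlated, so they compress well into few dimensions.
basis = rng.normal(size=(5, 50))
healthy = rng.normal(size=(200, 5)) @ basis \
          + 0.01 * rng.normal(size=(200, 50))

# "Train": learn a 5-dimensional subspace via SVD on the centered data.
mean = healthy.mean(axis=0)
_, _, vt = np.linalg.svd(healthy - mean, full_matrices=False)
components = vt[:5]                      # top principal directions

def reconstruction_error(x):
    z = (x - mean) @ components.T        # project into the learned subspace
    x_hat = z @ components + mean        # reconstruct from the projection
    return float(np.linalg.norm(x - x_hat))

# Set the anomaly threshold from the healthy data itself.
threshold = max(reconstruction_error(x) for x in healthy) * 1.5

normal_sample = rng.normal(size=5) @ basis           # follows healthy pattern
buggy_sample = normal_sample + rng.normal(size=50)   # counters gone awry

print("normal flagged:", reconstruction_error(normal_sample) > threshold)
print("buggy flagged: ", reconstruction_error(buggy_sample) > threshold)
```

The deep model in the actual research plays the same role as the projection step here, but can capture nonlinear relationships between counters that a linear subspace would miss.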
The versatility of the algorithm
When their algorithm was ready, the researchers tested if it could find and diagnose a performance bug in a commercially available data management software used by companies to keep track of their numbers and figures. First, they trained their algorithm to recognize normal counter data by running an older, glitch-free version of the data management software.
Next, they ran their algorithm on an updated version of the software with the performance regression. They found that their algorithm located and diagnosed the bug within a few hours. Muzahid said this type of analysis could take a considerable amount of time if done manually.
In addition to diagnosing performance regressions in software, Muzahid noted that their deep learning algorithm has potential uses in other areas of research as well, such as developing the technology needed for autonomous driving.
“The basic idea is once again the same, that is being able to detect an anomalous pattern,” said Muzahid. “Self-driving cars must be able to detect whether a car or a human is in front of it and then act accordingly. So, it’s again a form of anomaly detection and the good news is that is what our algorithm is already designed to do.”
As modern computer systems become more complex and interconnected, we are seeing more vulnerabilities than ever before. As attacks become more pervasive and sophisticated, they are often progressing past the software layer and compromising hardware. As a response, the industry has been working to deliver microarchitectural improvements and today, implementing hardware-based security is widely recognized as a best practice.
However, hardware-based security brings its own set of challenges when it is not designed, implemented or verified properly. Combined with increasingly sophisticated methods of exploiting hardware by chaining hardware weaknesses together with software vulnerabilities, it is evident that the industry needs a better and more in-depth taxonomy of common hardware security vulnerabilities, including information on how these vulnerabilities get introduced into products, how they can be exploited, their associated risks, and best practices to prevent and identify them early in the product development lifecycle.
Today, a key resource for tracking software vulnerabilities exists in MITRE’s Common Weakness Enumeration (CWE) system, which is also complemented by the Common Vulnerability and Exposures (CVE) system.
A simple way to differentiate the two is that CWE includes a taxonomy of common security vulnerability types and provides different views for a user to traverse different categorical buckets, whereas the CVE maintains the list of specific vulnerability instances that have already been found and reported publicly. Multiple CVEs are usually mapped to specific CWEs.
Essentially, the two systems work hand-in-hand to provide the ultimate vulnerability reference guide. These resources aim to educate architects and developers to identify potential mistakes when designing and developing software products. At the same time, they enable security researchers and tool vendors to pinpoint current gaps, so they can offer better tools and methodologies to automate the detection of common software security issues.
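The relationship between the two systems amounts to a many-to-one mapping from concrete CVE records to CWE weakness classes. A minimal sketch of that data model follows; the CVE identifiers are real public identifiers, but the CWE groupings shown are illustrative assignments for the sake of the example, not authoritative classifications.

```python
from collections import defaultdict

# (CVE identifier, CWE class it is filed under) -- groupings illustrative only.
cve_records = [
    ("CVE-2019-11157", "CWE-1256"),  # Plundervolt-style voltage interface abuse
    ("CVE-2018-3615",  "CWE-203"),   # a speculative-execution information leak
    ("CVE-2018-3639",  "CWE-203"),   # another leak filed under the same class
]

# Group the specific instances (CVEs) under their weakness class (CWE).
by_weakness = defaultdict(list)
for cve, cwe in cve_records:
    by_weakness[cwe].append(cve)

# Many CVEs map to one CWE class:
print(by_weakness["CWE-203"])
```

A hardware CWE would extend the left-hand side of this mapping: new weakness classes for hardware-specific root causes, so that hardware CVEs have somewhere precise to point.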
With the growing awareness of hardware vulnerabilities, the CWE could be enhanced to include relevant entry points, common consequences, examples, countermeasures and detection methods from the specific hardware perspective. Furthermore, there are hardware-centric weaknesses that are related to the physical properties of hardware devices (e.g., temperature, voltage glitches, current, wear out, interference, and more) which the CWE does not yet categorize.
Due to these missing reference materials for hardware vulnerabilities in the CWE, researchers do not have the same standard taxonomy that would enable them to share information and techniques with one another. If we expect hardware vendors and their partners to collectively deliver more secure solutions, we must have a common language for discussing hardware security vulnerabilities.
Over the past few years, Intel researchers have been active in raising public awareness on common hardware security vulnerabilities (through academia, at conferences, and even with the industry’s first hardware capture-the-flag competition). But more can always be done. Here are six ways the industry would benefit from a standardized Hardware CWE:
1. Product architects and designers could gain a deeper understanding of the common hardware security pitfalls, allowing them to potentially avoid repeat mistakes when creating solutions.
2. Verification engineers could become more fluent in commonly made security mistakes and how they can be effectively detected at various stages of the product development lifecycle. This would enable them to devise proper verification plans and test strategies for improving the security robustness of products.
3. Security architects and researchers could better focus their energy on systemic issues and work to identify effective mitigations that help eliminate risks and/or make exploitation much more difficult for attackers.
4. Electronic Design Automation (EDA) vendors could prioritize and expand their verification tool features and offerings. This could improve the effectiveness of their tools in guiding users to avoid the introduction of common vulnerabilities. It could also provide a common platform for EDA tool users to compare and benchmark the capabilities of different tool options, enabling them to identify the right ones that meet their specific needs.
5. Educators could develop training materials and best practices that focus on the most relevant areas of concern, so university curricula and corporate training could help audiences gain the skills they need.
6. Security researchers could leverage a common taxonomy to communicate without ambiguities, facilitating learning exchange, systematic study and collaboration. And a public database would also make the research field more accessible for aspiring researchers.
As our industry moves forward to combat the latest threats, it is vital that we invest in research, tooling and the proper resources to catalog and evaluate both software and hardware vulnerabilities.
Today, categorizing hardware vulnerabilities, root causes, and mitigation strategies often feels like an uphill battle. As hardware vulnerabilities continue to get more complex and challenging for the industry, creating a common taxonomy for discussing, documenting and sharing hardware-based threats becomes paramount.
Let’s work together as an industry to ensure that we are speaking the same language when it comes to researching and mitigating the hardware vulnerabilities of the future.
- Arun Kanuparthi, Offensive Security Researcher, Intel
- Hareesh Khattri, Offensive Security Researcher, Intel
Ping An Insurance announced that Ping An Technology and Intel signed a strategic collaboration agreement in Shenzhen, China. The two companies plan to establish a joint laboratory, cooperate on products and technology, and form a joint project team in areas of high-performance computing, including storage, network, cloud, artificial intelligence (AI) and security.
The signing ceremony included Ping An Technology leaders Ericson Chan, CEO; William Fang, Chief Technology Officer; and Huang Wei, Strategic Partner and General Manager of Ecosystems; along with Intel leaders Rose Schooler, Corporate Vice President, Sales and Marketing Group, and General Manager, Data Center Group; Wang Rui, Vice President, Sales and Marketing Group, and PRC Country Manager; and Liang Yali, General Manager, Intel Biz Consumption Group, PRC.
Mr. Chan said, “Partnering with Intel will give Ping An an edge to boost our cloud technologies and to supercharge our AI-based services and solutions. We will further strengthen our data protection with Intel hardware-enabled security in finance and healthcare, two areas where it is so critical.”
Ms. Schooler said, “The two parties will explore joint development in technology areas including AI, high performance computing, visual computing and FPGAs using the full range of Intel’s data-centric portfolio. We plan to innovate and support an open ecosystem around Ping An Technology’s Ping An Cloud.”
Backed by the strong financial background of Ping An and the world-class computer technology of Intel, Ping An Technology develops and applies technologies in a wide range of scenarios to support five ecosystems: financial services, health care, auto services, real estate services, and smart city services.
Ping An Cloud has already yielded numerous innovative results with Intel in the development of financial private and public clouds. The two parties will continue to create more competitive products and services together, minimizing costs and maximizing efficiency in technology innovation.
Intel Labs unveiled what is believed to be a first-of-its-kind cryogenic control chip — code-named “Horse Ridge” — that will speed up development of full-stack quantum computing systems. Horse Ridge will enable control of multiple quantum bits (qubits) and set a clear path toward scaling larger systems — a major milestone on the path to quantum practicality.
Developed together with Intel’s research collaborators at QuTech, a partnership between TU Delft and TNO (Netherlands Organization for Applied Scientific Research), Horse Ridge is fabricated using Intel’s 22nm FinFET technology.
In-house fabrication of these control chips at Intel will dramatically accelerate the company’s ability to design, test and optimize a commercially viable quantum computer.
“While there has been a lot of emphasis on the qubits themselves, the ability to control many qubits at the same time had been a challenge for the industry. Intel recognized that quantum controls were an essential piece of the puzzle we needed to solve in order to develop a large-scale commercial quantum system.
“That’s why we are investing in quantum error correction and controls. With Horse Ridge, Intel has developed a scalable control system that will allow us to significantly speed up testing and realize the potential of quantum computing.” – Jim Clarke, Intel’s director of Quantum Hardware.
Why it matters
In the race to realize the power and potential of quantum computers, researchers have focused extensively on qubit fabrication, building test chips that demonstrate the exponential power of a small number of qubits operating in superposition.
However, in early quantum hardware developments — including design, testing and characterization of Intel’s silicon spin qubit and superconducting qubit systems — Intel identified a major bottleneck toward realizing commercial-scale quantum computing: interconnects and control electronics.
With Horse Ridge, Intel introduces an elegant solution that will enable the company to control multiple qubits and set a clear path toward scaling future systems to larger qubit counts — a major milestone on the path to quantum practicality.
What quantum practicality is
Quantum computers promise the potential to tackle problems that conventional computers can’t handle by leveraging a phenomenon of quantum physics that allows qubits to exist in multiple states simultaneously. As a result, qubits can conduct a large number of calculations at the same time, dramatically speeding up complex problem-solving.
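The exponential scaling behind this claim can be illustrated with a toy state-vector simulation, a sketch of our own in plain NumPy unrelated to Intel's software stack: n qubits require 2^n complex amplitudes, and putting each qubit into superposition spreads probability across every possible outcome at once.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                    # a qubit in state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate: creates superposition

# Start with 3 qubits, all in |0>: the joint state has 2**3 = 8 amplitudes.
state = ket0
for _ in range(2):
    state = np.kron(state, ket0)

# Apply H to each qubit (identity on the others).
for q in range(3):
    op = np.array([[1.0]])
    for i in range(3):
        op = np.kron(op, H if i == q else np.eye(2))
    state = op @ state

probs = np.abs(state) ** 2
print(len(state), probs[0])  # 8 amplitudes; each of the 8 outcomes is equally likely
```

Each added qubit doubles the size of the state being manipulated, which is both the source of the promised speedups and the reason classical simulation runs out of memory quickly.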
The quantum research community is still at mile one of a marathon toward demonstrating quantum practicality, a benchmark for determining whether a quantum system can deliver game-changing performance to solve real-world problems.
Intel’s investment in quantum computing covers the full hardware and software stack in pursuit of the development and commercialization of a practical, commercially viable quantum system.
Why Horse Ridge is important
To date, researchers have been focused on building small-scale quantum systems to demonstrate the potential of quantum devices. In these efforts, researchers have relied on existing electronic tools and high-performance computing rack-scale instruments to connect the quantum system inside the cryogenic refrigerator to the traditional computational devices regulating qubit performance and programming the system.
These devices are often custom-designed to control individual qubits, requiring hundreds of connective wires into and out of the refrigerator in order to control the quantum processor.
This extensive control cabling for each qubit will hinder the ability to scale the quantum system to the hundreds or thousands of qubits required to demonstrate quantum practicality, not to mention the millions of qubits required for a commercially viable quantum solution.
With Horse Ridge, Intel radically simplifies the control electronics required to operate a quantum system. Replacing these bulky instruments with a highly-integrated system-on-chip (SoC) will simplify system design and allow for sophisticated signal processing techniques to accelerate set-up time, improve qubit performance and enable the system to efficiently scale to larger qubit counts.
More about Horse Ridge
Horse Ridge is a highly integrated, mixed-signal SoC that brings the qubit controls into the quantum refrigerator — as close as possible to the qubits themselves. It effectively reduces the complexity of quantum control engineering from hundreds of cables running into and out of a refrigerator to a single, unified package operating near the quantum device.
Designed to act as a radio frequency (RF) processor to control the qubits operating in the refrigerator, Horse Ridge is programmed with instructions that correspond to basic qubit operations. It translates those instructions into electromagnetic microwave pulses that can manipulate the state of the qubits.
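That translation step can be sketched as follows: a symbolic gate instruction selects the amplitude of a shaped burst at the qubit's drive frequency. The carrier frequency, duration and Gaussian envelope below are invented illustrative values, not Horse Ridge's actual parameters.

```python
import numpy as np

def pulse(gate, drive_ghz=6.0, duration_ns=20.0, samples=2000):
    """Return a sampled microwave burst for a symbolic single-qubit gate."""
    t = np.linspace(0, duration_ns, samples)
    # The pulse amplitude sets the rotation angle: a full X gate (pi rotation)
    # needs twice the area of an X/2 gate (pi/2 rotation).
    amp = {"X": 1.0, "X/2": 0.5}[gate]
    envelope = amp * np.exp(-((t - duration_ns / 2) ** 2)
                            / (2 * (duration_ns / 6) ** 2))
    carrier = np.cos(2 * np.pi * drive_ghz * t)  # t in ns, f in GHz: cycles line up
    return envelope * carrier

x_pulse = pulse("X")
half_pulse = pulse("X/2")
print(x_pulse.max() > half_pulse.max())  # the full rotation uses a stronger burst
```

The real control chip performs this kind of synthesis in hardware, at cryogenic temperature, for many qubits at once; the sketch only shows the instruction-to-waveform idea.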
Named for one of the coldest regions in Oregon, the Horse Ridge control chip was designed to operate at cryogenic temperatures — approximately 4 Kelvin. To put this in context, 4 Kelvin is only a few degrees warmer than absolute zero — a temperature so cold that atoms nearly stop moving.
This feat is particularly exciting as Intel progresses its research into silicon spin qubits, which have the potential to operate at slightly higher temperatures than current quantum systems require.
Today, quantum computers operate in the millikelvin range — just a fraction of a degree above absolute zero. But silicon spin qubits have properties that could allow them to operate at 1 Kelvin or higher temperatures, which would dramatically reduce the challenges of refrigerating the quantum system.
As research progresses, Intel aims to have cryogenic controls and silicon spin qubits operate at the same temperature level. This will enable the company to leverage its expertise in advanced packaging and interconnect technologies to create a solution with the qubits and controls in one streamlined package.
To counter the growing sophistication of computer attacks, Intel and other chipmakers have built digital vaults into CPUs to segregate sensitive computations and secrets from the rest of the machine. Now, scientists have devised an attack that causes the Software Guard Extensions—Intel’s implementation of this secure CPU environment—to divulge cryptographic keys and induce potentially dangerous memory errors.
Plundervolt, as the attack has been dubbed, starts with the assumption that an attacker is able to run privileged software on a targeted computer. While that’s a lofty prerequisite, it’s precisely the scenario Intel’s SGX feature is designed to protect against. The chipmaker bills SGX as a private region that uses hardware-based memory encryption to isolate sensitive computations and data from malicious processes that run with high privilege levels. Intel goes as far as saying that “Only Intel SGX offers such a granular level of control and protection.”
But it turns out that subtle fluctuations in the voltage powering the main CPU can corrupt the normal functioning inside SGX. By subtly increasing or decreasing the voltage delivered to a CPU—operations known as “overvolting” and “undervolting”—a team of scientists has figured out how to induce SGX faults that leak cryptographic keys, break integrity assurances, and potentially induce memory errors that could be used in other types of attacks. While the exploit requires the execution of privileged code, it doesn’t rely on physical access, raising the possibility of remote attacks.
The breakthrough leading to these attacks was the scientists’ ability to build on previous research into an undocumented model-specific register in Intel x86 processors to abuse the dynamic voltage scaling interface that controls the amount of voltage used by a CPU. Also noteworthy is the surgical control the researchers gained over those voltage changes, precise enough to introduce specific types of faults.
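The publicly reverse-engineered layout of that register (MSR 0x150, as reported in the Plundervolt paper and related undervolting research) can be sketched as follows. This only encodes a request value; actually issuing it requires ring-0 privileges (for example, writing /dev/cpu/0/msr on Linux as root), which this sketch deliberately does not do, and the field layout should be treated as the researchers' reverse-engineered description rather than an Intel-documented interface.

```python
PLANE_CORE = 0  # voltage plane 0 is reported to be the CPU core plane

def encode_undervolt(plane, offset_mv):
    """Build the 64-bit MSR 0x150 value requesting a voltage offset in mV.

    Layout per the published reverse engineering:
      bit 63        : command valid
      bits 42-40    : voltage plane index
      bits 36-32    : 0x11 = write the offset
      bits 31-21    : 11-bit two's-complement offset in 1/1024 V steps
    """
    steps = round(offset_mv * 1.024)      # convert mV to 1/1024 V units
    offset_bits = (steps & 0x7FF) << 21   # keep 11 bits, place in the field
    return (1 << 63) | (plane << 40) | (0x11 << 32) | offset_bits

# e.g. a -100 mV undervolt request on the core plane:
value = encode_undervolt(PLANE_CORE, -100)
print(hex(value))
```

The attack's insight was not the encoding itself but the timing: dropping the voltage for just the window in which an enclave executes a sensitive multiply makes the hardware compute a wrong, exploitable result.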
In a paper published on Tuesday, the scientists wrote:
In this paper, we present Plundervolt, a novel attack against Intel SGX to reliably corrupt enclave computations by abusing privileged dynamic-voltage-scaling interfaces. Our work builds on reverse engineering efforts that revealed which Model-Specific Registers (MSRs) are used to control the dynamic voltage scaling from software [64, 57, 49]. The respective MSRs exist on all Intel Core processors. Using this interface to very briefly decrease the CPU voltage during a computation in a victim SGX enclave, we show that a privileged adversary is able to inject faults into protected enclave computations. Crucially, since the faults happen within the processor package, i.e., before the results are committed to memory, Intel SGX’s memory integrity protection fails to defend against our attacks. To the best of our knowledge, we are the first to practically showcase an attack that directly breaches SGX’s integrity guarantees. In summary, our main contributions are:
1) We present Plundervolt, a novel software-based fault attack on Intel Core x86 processors. For the first time, we bypass Intel SGX’s integrity guarantees by directly injecting faults within the processor package.
2) We demonstrate the effectiveness of our attacks by injecting faults into Intel’s RSA-CRT and AES-NI implementations running in an SGX enclave, and we reconstruct full cryptographic keys with negligible computational efforts.
3) We explore the use of Plundervolt to induce memory safety errors into bug-free enclave code. Through various case studies, we show how in-enclave pointers can be redirected into untrusted memory and how Plundervolt may cause heap overflows in widespread SGX runtimes.
4) Finally, we discuss countermeasures and why fully mitigating Plundervolt may be challenging in practice.
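The RSA-CRT key recovery in contribution 2 rests on the classic Bellcore observation: a CRT signature that is faulty modulo one prime but correct modulo the other reveals a factor of the modulus via a single gcd. A worked sketch with tiny textbook-sized numbers (our own illustration, not the paper's code; real keys are 2048+ bits):

```python
from math import gcd

p, q, e = 61, 53, 17
n = p * q                          # public modulus, 3233
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

m = 1234                           # message representative, m < n

def sign_crt(m, fault=False):
    """RSA-CRT signing: compute mod p and mod q separately, then recombine."""
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    if fault:
        sq = (sq + 1) % q          # a single corrupted half-computation
    h = (pow(q, -1, p) * (sp - sq)) % p   # Garner recombination
    return sq + q * h

good = sign_crt(m)
assert pow(good, e, n) == m        # the unfaulted signature verifies

bad = sign_crt(m, fault=True)
# The faulty signature is still correct mod p but wrong mod q, so
# bad**e - m is divisible by p and not by q:
recovered_p = gcd(pow(bad, e, n) - m, n)
print(recovered_p)                 # a secret prime factor of n
```

This is why a single induced multiplication fault inside an enclave suffices for full key recovery with negligible computation.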
The researchers privately reported the vulnerability to Intel ahead of Tuesday’s publication. In response, Intel has released microcode and BIOS updates that mitigate the attacks by locking voltage to the default settings. Readers using Intel Core processors from Skylake onward, as well as some Xeon E-based platforms, should install INTEL-SA-00289 once it becomes available from their respective computer makers. The vulnerability is tracked as CVE-2019-11157.
GitHub, the world’s largest open source code repository and leading software development platform, has launched GitHub Security Lab, a program aimed at researchers, maintainers, and companies that want to contribute to the overall security of open source software. “Our team will lead by example, dedicating full-time resources to finding and reporting vulnerabilities in critical open source projects,” said Jamie Cool, VP of Product Management, Security at GitHub. Current … More
The post GitHub Security Lab aims to make open source software more secure appeared first on Help Net Security.
Intel’s Patch Tuesday releases are rarely as salient as those pushed out this month: the semiconductor chip manufacturer has patched a slew of high-profile vulnerabilities in its chips and drivers. TPM-FAIL is the name given to vulnerabilities found in some of Intel’s firmware-based TPM (fTPM) products and STMicroelectronics’ TPM chipsets, discovered by Ahmad “Daniel” Moghimi and Berk Sunar from Worcester Polytechnic Institute, Thomas Eisenbarth from University of Lübeck and Nadia Heninger from University of California at … More
The post Intel releases updates to plug TPM-FAIL flaws, foil ZombieLoad v2 attacks appeared first on Help Net Security.
First disclosed in January 2018, the Meltdown and Spectre attacks have opened the floodgates, leading to extensive research into the speculative execution hardware found in modern processors, and a number of additional attacks have been published in the months since.
Today sees the publication of a range of closely related flaws named variously RIDL, Fallout, ZombieLoad, or Microarchitectural Data Sampling. The many names are a consequence of the several groups that discovered the different flaws. From the computer science department of Vrije Universiteit Amsterdam and Helmholtz Center for Information Security, we have “Rogue In-Flight Data Load.” From a team spanning Graz University of Technology, the University of Michigan, Worcester Polytechnic Institute, and KU Leuven, we have “Fallout.” From Graz University of Technology, Worcester Polytechnic Institute, and KU Leuven, we have “ZombieLoad,” and from Graz University of Technology, we have “Store-to-Leak Forwarding.”
Intel is using the name “Microarchitectural Data Sampling” (MDS), and that’s the name that arguably gives the most insight into the problem. The issues were independently discovered by both Intel and the various other groups, with the first notification to the chip company occurring in June last year.
A recap: Processors guess a lot
All of the attacks follow a common set of principles. Each processor has an architectural behavior (the documented behavior that describes how the instructions work and that programmers depend on to write their programs) and a microarchitectural behavior (the way an actual implementation of the architecture behaves). These can diverge in subtle ways. For example, architecturally, a processor performs each instruction sequentially, one by one, waiting for all the operands of an instruction to be known before executing that instruction. A program that loads a value from a particular address in memory will wait until the address is known before trying to perform the load and then wait for the load to finish before using the value.
Microarchitecturally, however, the processor might try to speculatively guess at the address so that it can start loading the value from memory (which is slow) or it might guess that the load will retrieve a particular value. It will typically use a value from the cache or translation lookaside buffer to form this guess. If the processor guesses wrong, it will ignore the guessed-at value and perform the load again, this time with the correct address. The architecturally defined behavior is thus preserved, as if the processor always waited for values before using them.
But that faulty guess will disturb other parts of the processor; the main approach is to modify the cache in a way that depends on the guessed value. This modification causes subtle timing differences (because it’s faster to read data that’s already in cache than data that isn’t) that an attacker can measure. From these measurements, the attacker can infer the guessed value, which is to say, the value that was in cache. That value can be sensitive and valuable to the attacker.
MDS is broadly similar, but instead of leaking values from cache, it leaks values from various buffers within the processor. The processor has a number of specialized buffers that it uses for moving data around internally. For example, line fill buffers (LFB) are used to load data into the level 1 cache. When the processor reads from main memory, it first checks the level 1 data cache to see if it already knows the value. If it doesn’t, it sends a request to main memory to retrieve the value. That value is placed into an LFB before being written to the cache. Similarly, when writing values to main memory, they’re placed temporarily in store buffers. Through a process called store-to-load forwarding, the store buffer can also be used to service memory reads. And finally, there are structures called load ports, which are used to copy data from memory to a register.
All three buffers can hold stale data: a line fill buffer will hold data from a previous fetch from main memory while waiting for the new fetch to finish; a store buffer can contain a mix of data from different store operations (and hence, can forward a mix of new and old data to a load buffer); and a load port similarly can contain old data while waiting for the new data from memory.
Just as the previous speculative execution attacks would use a stale value in cache, the new MDS attacks perform speculation based on a stale value from one of these buffers. All three of the buffer types can be used in such attacks, with the exact buffer depending on the precise attack code.
The “sampling” in the name is because of the complexities of this kind of attack. The attacker has very little control over what’s in these buffers. The store buffer, for example, can contain stale data from different store operations, so while some of it might be of interest to an attacker, it can be mixed with other irrelevant data. To get usable data, many, many attempts have to be made at leaking information, so it must be sampled many times.
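That sampling process can be illustrated with a toy model (invented probabilities, our own sketch): each leak attempt only sometimes returns the victim's byte, so the attacker repeats the measurement many times and keeps the most frequent value.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the illustration is reproducible
SECRET = 0x5A

def leak_once():
    # Suppose ~30% of attempts catch the secret in the buffer;
    # the rest return unrelated stale bytes.
    if random.random() < 0.3:
        return SECRET
    return random.randrange(256)

samples = [leak_once() for _ in range(2000)]
recovered, count = Counter(samples).most_common(1)[0]
print(hex(recovered))  # the secret dominates the histogram of samples
```

With 2000 samples, the secret appears roughly 600 times while every irrelevant byte appears only a handful of times, which is why repeated sampling turns a noisy, uncontrolled leak into usable data.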
On the other hand, the attacks, like the Meltdown and Foreshadow attacks, bypass the processor’s internal security domains. For example, a user mode process can see data leaked from the kernel, or an insecure process can see data leaked from inside a secure SGX enclave. As with previous similar attacks, the use of hyperthreading, where both an attacker thread and a victim thread run on the same physical core, can increase the ease of exploitation.
Generally, an attacker has little or no control over these buffers; there’s no easy way to force the buffers to contain sensitive information, so there’s no guarantee that the leaked data will be useful. The VU Amsterdam researchers have shown a proof-of-concept attack wherein a browser is able to read the shadow password file of a Linux system. However, to make this attack work, the victim system is made to run the passwd command over and over, ensuring that there’s a high probability that the contents of the file will be in one of the buffers. Intel accordingly believes the attacks to be low or medium risk.
That doesn’t mean that they’ve gone unfixed, however. Today a microcode update for Sandy Bridge through first-generation Coffee Lake and Whiskey Lake chips will ship. In conjunction with suitable software support, operating systems will be able to forcibly flush the various buffers to ensure that they’re devoid of sensitive data. First-generation Coffee Lake and Whiskey Lake processors are already immune to MDS using the load fill buffers, as this happened to be fixed as part of the remediation for the level 1 terminal fault and Meltdown attacks. Moreover, the very latest Coffee Lake, Whiskey Lake, and Cascade Lake processors include complete hardware fixes for all three variants.
For systems dependent on microcode fixes, Intel says that the performance overhead will typically be under three percent but, under certain unfavorable workloads, could be somewhat higher. The company has also offered an official statement:
Microarchitectural Data Sampling (MDS) is already addressed at the hardware level in many of our recent 8th and 9th Generation Intel® Core™ processors, as well as the 2nd Generation Intel® Xeon® Scalable Processor Family. For other affected products, mitigation is available through microcode updates, coupled with corresponding updates to operating system and hypervisor software that are available starting today. We’ve provided more information on our website and continue to encourage everyone to keep their systems up to date, as it’s one of the best ways to stay protected. We’d like to extend our thanks to the researchers who worked with us and our industry partners for their contributions to the coordinated disclosure of these issues.
Like Meltdown, this issue does appear to be Intel-specific. The use of stale data from the buffers to perform speculative execution lies somewhere between a performance improvement and an ease-of-implementation issue, and neither AMD’s chips nor ARM’s designs are believed to suffer the same problem. Architecturally, the Intel processors all do the right thing—they do trap and roll back faulty speculations, as they should, as if the bad data was never used—but as Meltdown and Spectre have made very clear, that’s not enough to ensure the processor operates safely.
Listing image by Marina Minkin