Intel introduced Intel RealSense ID, an on-device solution that combines an active depth sensor with a specialized neural network designed to deliver secure, accurate and user-aware facial authentication. It works with smart locks, access control, point-of-sale, ATMs, kiosks and more. “Intel RealSense ID combines purpose-built hardware and software with a dedicated neural network designed to deliver a secure facial authentication platform that users can trust,” said Sagi Ben Moshe, Intel corporate vice president and general …
The post Intel RealSense ID: Facial authentication designed with privacy as a priority appeared first on Help Net Security.
Just as ICS-CERT published a new advisory detailing four new vulnerabilities in the Treck TCP/IP stack, Forescout released an open-source tool for detecting whether a network device runs one of the four open-source TCP/IP stacks (and their variations) affected by the Amnesia:33 vulnerabilities. New vulnerabilities in the Treck TCP/IP stack Reported by Intel researchers and confirmed by Treck Inc., the four newly discovered vulnerabilities affect Treck TCP/IP stack version 6.0.1.67 and prior. Of those, CVE-2020-25066 is …
Virtualization has brought a dramatic level of growth and advancement to technology and business over the years. It transforms physical infrastructure into dedicated, partitioned virtual machines (VMs) that deliver critical cloud applications and services to multiple customer organizations using the same hardware. While one server would previously be tasked with one OS install, today’s servers can host multiple instances of Windows or Linux running concurrently to increase system utilization. Client virtualization is the next step …
Intel unveiled Horse Ridge II, its second-generation cryogenic control chip, marking another milestone in the company’s progress toward overcoming scalability, one of quantum computing’s biggest hurdles.
Building on innovations in the first-generation Horse Ridge controller introduced in 2019, Horse Ridge II supports enhanced capabilities and higher levels of integration for elegant control of the quantum system. New features include the ability to manipulate and read qubit states and control the potential of several gates required to entangle multiple qubits.
“With Horse Ridge II, Intel continues to lead innovation in the field of quantum cryogenic controls, drawing from our deep interdisciplinary expertise bench across the Integrated Circuit design, Labs and Technology Development teams.
“We believe that increasing the number of qubits without addressing the resulting wiring complexities is akin to owning a sports car, but constantly being stuck in traffic.
“Horse Ridge II further streamlines quantum circuit controls, and we expect this progress to deliver increased fidelity and decreased power output, bringing us one step closer toward the development of a ‘traffic-free’ integrated quantum circuit,” said Jim Clarke, Intel director of Quantum Hardware, Components Research Group.
Why it matters
Today’s early quantum systems use room-temperature electronics with many coaxial cables that are routed to the qubit chip inside a dilution refrigerator. This approach does not scale to a large number of qubits due to form factor, cost, power consumption and thermal load to the fridge.
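The scaling problem described above can be made concrete with a back-of-the-envelope model. The line counts per qubit below are illustrative assumptions, not Intel’s actual figures:

```python
# Illustrative model of the wiring bottleneck: with room-temperature
# electronics, roughly a fixed number of coaxial lines (e.g., drive and
# readout) must reach every qubit inside the dilution refrigerator.
# The per-qubit line count here is an assumption for illustration.

def coax_lines_needed(n_qubits, lines_per_qubit=2):
    """Cables into the fridge grow linearly with qubit count."""
    return n_qubits * lines_per_qubit

# A ~50-qubit research system is already wiring-bound:
print(coax_lines_needed(50))          # 100 cables
# A large fault-tolerant machine would need millions:
print(coax_lines_needed(1_000_000))   # 2,000,000 cables -- infeasible
```

Linear growth is exactly what integrated cryogenic control is meant to break: moving the control electronics next to the qubits replaces per-qubit cabling with an on-chip interface.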
With the original Horse Ridge, Intel took the first step toward addressing this challenge by radically simplifying the need for multiple racks of equipment and thousands of wires running into and out of the refrigerator in order to operate the quantum machine.
Intel replaced these bulky instruments with a highly integrated system-on-chip (SoC) that simplifies system design and uses sophisticated signal processing techniques to accelerate setup time, improve qubit performance and enable the engineering team to efficiently scale the quantum system to larger qubit counts.
About the new features
Horse Ridge II builds on the first-generation SoC’s ability to generate radio frequency pulses to manipulate the state of the qubit, known as qubit drive. It introduces two additional control features, paving the way for further integration of external electronic controls into the SoC operating inside the cryogenic refrigerator.
New features enable:
- Qubit readout: The function grants the ability to read the current qubit state. The readout is significant, as it allows for on-chip, low-latency qubit state detection without storing large amounts of data, thus saving memory and power.
- Multigate pulsing: The ability to simultaneously control the potential of many qubit gates is fundamental for effective qubit readouts and the entanglement and operation of multiple qubits, paving the path toward a more scalable system.
The addition of a programmable microcontroller operating within the integrated circuit enables Horse Ridge II to deliver higher levels of flexibility and sophisticated controls in how the three control functions are executed.
The microcontroller uses digital signal processing techniques to perform additional filtering on pulses, helping to reduce crosstalk between qubits.
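As a rough illustration of the pulse-filtering idea, the sketch below smooths a sharp-edged control pulse with a simple FIR (moving-average) filter; sharp edges carry high-frequency content that can couple into neighboring qubits. This is a generic digital-filtering example, not Horse Ridge II’s actual signal chain:

```python
# Minimal FIR filtering sketch: soften the edges of a control pulse to
# reduce its high-frequency content (the kind that causes crosstalk).
# Generic illustration only -- not Intel's actual filter design.

def fir_filter(signal, taps):
    """Convolve `signal` with FIR coefficients `taps` (causal, same length)."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if 0 <= n - k < len(signal):
                acc += h * signal[n - k]
        out.append(acc)
    return out

# A square pulse with abrupt edges...
pulse = [0.0] * 4 + [1.0] * 8 + [0.0] * 4
# ...filtered by a 4-tap moving average, which ramps the edges gradually.
taps = [0.25, 0.25, 0.25, 0.25]
smoothed = fir_filter(pulse, taps)
print(smoothed[3:8])  # [0.0, 0.25, 0.5, 0.75, 1.0] -- a gradual rise
```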
Horse Ridge II is implemented using Intel 22nm low-power FinFET technology (22FFL) and its functionality has been verified at 4 kelvins. Today, a quantum computer operates in the millikelvin range – just a fraction of a degree above absolute zero.
But silicon spin qubits – the underpinning of Intel’s quantum efforts – have properties that could allow them to operate at temperatures of 1 kelvin or higher, which would significantly reduce the challenges of refrigerating the quantum system.
Intel’s cryogenic control research focuses on achieving the same operational temperature level for both the controls and silicon spin qubits.
Ongoing advances in this area, as demonstrated in Horse Ridge II, represent progress over today’s brute force approaches to scaling quantum interconnects and are a critical element of the company’s longer-term quantum practicality vision.
Intel unveiled ControlFlag – a machine programming research system that can autonomously detect errors in code. Even in its infancy, this self-supervised system shows promise as a productivity tool to assist software developers with the labor-intensive task of debugging.
In preliminary tests, ControlFlag was trained on over 1 billion unlabeled lines of production-quality code and learned to detect novel defects.
ControlFlag and debugging
In a world increasingly run by software, developers continue to spend a disproportionate amount of time fixing bugs rather than coding. It’s estimated that of the $1.25 trillion that software development costs the IT industry every year, 50 percent is spent debugging code.
Debugging is expected to take an even bigger toll on developers and the industry at large. As we progress into an era of heterogenous architectures — one defined by a mix of purpose-built processors to manage the massive sea of data available today — the software required to manage these systems becomes increasingly complex, creating a higher likelihood for bugs. In addition, it is becoming difficult to find software programmers who have the expertise to correctly, efficiently and securely program across diverse hardware, which introduces another opportunity for new and harder-to-spot errors in code.
When fully realized, ControlFlag could help alleviate this challenge by automating the tedious parts of software development, such as testing, monitoring and debugging. This would not only enable developers to do their jobs more efficiently and free up more time for creativity, but it would also address one of the biggest price tags in software development today.
“We think ControlFlag is a powerful new tool that could dramatically reduce the time and money required to evaluate and debug code. According to studies, software developers spend approximately 50% of the time debugging. With ControlFlag, and systems like it, I imagine a world where programmers spend notably less time debugging and more time on what I believe human programmers do best — expressing creative, new ideas to machines,” said Justin Gottschlich, principal scientist and director/founder of Machine Programming Research at Intel Labs.
How ControlFlag works
ControlFlag’s bug detection capabilities are enabled by machine programming, a fusion of machine learning, formal methods, programming languages, compilers and computer systems.
ControlFlag specifically operates through a capability known as anomaly detection. Just as humans learn through observation which patterns in the natural world count as “normal,” ControlFlag learns from verified examples to detect normal coding patterns, identifying anomalies in code that are likely to cause a bug. Moreover, ControlFlag can detect these anomalies regardless of programming language.
A key benefit of ControlFlag’s unsupervised approach to pattern recognition is that it can intrinsically learn to adapt to a developer’s style. Given only limited input about which control structures the program should evaluate, ControlFlag can identify stylistic variations in a programming language, much as a reader recognizes the difference between full words and contractions in English.
The tool learns to identify and tag these stylistic choices and can customize error identification and solution recommendations based on its insights, which minimizes the chance that ControlFlag flags as an error what is simply a stylistic deviation between two developer teams.
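A heavily simplified sketch of frequency-based anomaly detection on code patterns, in the spirit of what is described above (not Intel’s actual implementation): abstract each `if`-condition into a “shape,” count shape frequencies across a corpus, and flag conditions whose shape is unusually rare:

```python
# Toy anomaly detector for code patterns: rare condition "shapes" are
# flagged as likely bugs. Illustrative only -- ControlFlag's real
# pipeline (ASTs, self-supervision at billion-line scale) is far richer.
import re
from collections import Counter

def condition_shape(cond):
    """Abstract identifiers and literals so `x == 0` and `y == 1`
    share the shape 'VAR == LIT'."""
    cond = re.sub(r"\b[A-Za-z_]\w*\b", "VAR", cond)  # identifiers first
    cond = re.sub(r"\b\d+\b", "LIT", cond)           # then numeric literals
    return cond

def find_anomalies(conditions, min_count=2):
    """Return conditions whose shape occurs fewer than `min_count` times."""
    counts = Counter(condition_shape(c) for c in conditions)
    return [c for c in conditions if counts[condition_shape(c)] < min_count]

corpus = ["x == 0", "y == 1", "n == 42",
          "flag = 1"]          # likely a typo: assignment in a condition
print(find_anomalies(corpus))  # ['flag = 1']
```

The same frequency intuition is how the tool can tolerate style: a pattern a team uses consistently becomes common in its corpus and stops looking anomalous.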
Intel has even started evaluating using ControlFlag internally to identify bugs in its own software and firmware product development. It is a key element of Intel’s Rapid Analysis for Developers project, which aims to accelerate velocity by providing expert assistance.
Researchers at the University of Birmingham have managed to break Intel SGX, a set of security functions used by Intel processors, by creating a $30 device to control CPU voltage.
Break Intel SGX
The work follows a 2019 project, in which an international team of researchers demonstrated how to break Intel’s security guarantees using software undervolting. This attack, called Plundervolt, used undervolting to induce faults and recover secrets from Intel’s secure enclaves.
Intel fixed this vulnerability in late 2019 by removing the ability to undervolt from software with microcode and BIOS updates.
Taking advantage of a separate voltage regulator chip
But now, a team in the University’s School of Computer Science has created a $30 device, called VoltPillager, to control the CPU’s voltage – thus side-stepping Intel’s fix. The attack requires physical access to the computer hardware – which is a relevant threat for SGX enclaves that are often assumed to protect against a malicious cloud operator.
The bill of materials for building VoltPillager is:
- Teensy 4.0 Development Board: $22
- Bus Driver/ Buffer * 2: $1
- SOT IC Adapter * 2: $13 for 6
How to build the VoltPillager board
This research takes advantage of the fact that a separate voltage regulator chip controls the CPU voltage. VoltPillager connects to this unprotected interface and precisely controls the voltage. The researchers show that this hardware undervolting can achieve the same results as Plundervolt, and more.
Zitai Chen, a PhD student in Computer Security at the University of Birmingham, says: “This weakness allows an attacker, if they have control of the hardware, to breach SGX security. Perhaps it might now be time to rethink the threat model of SGX. Can it really protect against malicious insiders or cloud providers?”
Intel unveiled a suite of new security features for the upcoming 3rd generation Intel Xeon Scalable platform, code-named “Ice Lake.”
Intel is doubling down on its Security First Pledge, bringing its pioneering and proven Intel Software Guard Extension (Intel SGX) to the full spectrum of Ice Lake platforms, along with new features that include Intel Total Memory Encryption (Intel TME), Intel Platform Firmware Resilience (Intel PFR) and new cryptographic accelerators to strengthen the platform and improve the overall confidentiality and integrity of data.
Data is a critical asset both in terms of the business value it may yield and the personal information that must be protected, so cybersecurity is a top concern.
The security features in Ice Lake enable Intel’s customers to develop solutions that help improve their security posture and reduce risks related to privacy and compliance, such as regulated data in financial services and healthcare.
“Protecting data is essential to extracting value from it, and with the capabilities in the upcoming 3rd Gen Xeon Scalable platform, we will help our customers solve their toughest data challenges while improving data confidentiality and integrity. This extends our long history of partnering across the ecosystem to drive security innovations,” said Lisa Spelman, corporate vice president, Data Platform Group and general manager, Xeon and Memory Group at Intel.
Data protection across the compute stack
Technologies such as disk- and network-traffic encryption protect data in storage and during transmission, but data can be vulnerable to interception and tampering while in use in memory.
“Confidential computing” is a rapidly emerging usage category that protects data while it is in use in a Trusted Execution Environment (TEE). Intel SGX is the most researched, updated and battle-tested TEE for data center confidential computing, with the smallest attack surface within the system. It enables application isolation in private memory regions, called enclaves, to help protect up to 1 terabyte of code and data while in use.
“Microsoft Azure was the first major public cloud to offer confidential computing, and customers from industries including finance, healthcare, government are using confidential computing on Azure today,” said Mark Russinovich, CTO, Microsoft Azure.
“Azure has confidential computing options for virtual machines, containers, machine learning, and more. We believe the next generation Intel Xeon processors with Intel SGX featuring full memory encryption and cryptographic acceleration will help our customers unlock even more confidential computing scenarios.”
Customers like the University of California San Francisco, NEC, Magnit and other organizations in highly regulated industries have relied on Intel to support their security strategy and leveraged Intel SGX with proven results. For example, healthcare organizations can more securely protect data — including electronic health records — with a trusted computing environment that better preserves patient privacy.
In other industries, such as retail, companies rely on Intel to help keep data confidential and protect intellectual property. Intel SGX helps customers unlock new multi-party shared compute scenarios that have been difficult to build in the past due to privacy, security and regulatory requirements.
Full memory encryption
To better protect the entire memory of a platform, Ice Lake introduces a new feature called Intel Total Memory Encryption (Intel TME). Intel TME helps ensure that all memory accessed from the Intel CPU is encrypted, including customer credentials, encryption keys and other IP or personal information on the external memory bus.
Intel developed this feature to provide greater protection for system memory against hardware attacks, such as removing and reading the dual in-line memory module (DIMM) after spraying it with liquid nitrogen or installing purpose-built attack hardware.
Using the NIST storage encryption standard, AES XTS, an encryption key is generated using a hardened random number generator in the processor without exposure to software. This allows existing software to run unmodified while better protecting memory.
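The defining property of a tweakable mode like AES-XTS is that the same plaintext encrypts differently at different memory addresses, so an attacker cannot spot repeated data across DIMM locations. The toy sketch below illustrates that property with stdlib primitives only; it is emphatically not AES-XTS and not suitable for real use:

```python
# Toy illustration of tweakable memory encryption in the spirit of
# AES-XTS: the physical address acts as a "tweak," so identical data at
# two addresses yields different ciphertext. NOT real AES-XTS, NOT for
# production -- illustration of the concept only.
import hashlib
import secrets

KEY = secrets.token_bytes(32)  # analogous to the CPU's RNG-derived key

def keystream(key, address, length):
    """Derive a per-address keystream: same key, different address ->
    a completely different stream (the address plays the tweak's role)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key
                              + address.to_bytes(8, "little")
                              + counter.to_bytes(4, "little")).digest()
        counter += 1
    return out[:length]

def xcrypt(key, address, data):
    """XOR with the per-address keystream; applying it twice decrypts."""
    ks = keystream(key, address, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

block = b"secret credential"
c1 = xcrypt(KEY, 0x1000, block)
c2 = xcrypt(KEY, 0x2000, block)
assert c1 != c2                            # same data, different address
assert xcrypt(KEY, 0x1000, c1) == block    # round trip recovers plaintext
```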
One of Intel’s design goals is to remove or reduce the performance impact of increased security so customers don’t have to choose between better protection and acceptable performance. Ice Lake introduces several new instructions used throughout the industry, coupled with algorithmic and software innovations, to deliver breakthrough cryptographic performance.
There are two fundamental innovations. The first is a technique to stitch together the operations of two algorithms that typically run in combination yet sequentially, allowing them to execute simultaneously. The second is a method to process multiple independent data buffers in parallel.
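The second innovation, processing independent buffers in parallel, can be illustrated conceptually. Ice Lake does this with SIMD instructions inside a single core; the sketch below merely demonstrates the idea, that independent buffers impose no ordering constraint, using thread-level parallelism:

```python
# Sketch of the "multiple independent data buffers in parallel" idea.
# The hardware does this with SIMD; we illustrate the concept with
# threads: because each buffer is independent, the work parallelizes
# trivially and the results match a sequential run exactly.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def sha256_hex(buf):
    return hashlib.sha256(buf).hexdigest()

buffers = [bytes([i]) * 4096 for i in range(8)]  # 8 independent buffers

# Sequential baseline.
sequential = [sha256_hex(b) for b in buffers]

# Parallel: map preserves input order, so outputs line up with inputs.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(sha256_hex, buffers))

assert parallel == sequential  # identical digests, computed concurrently
```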
Sophisticated adversaries may attempt to compromise or disable the platform’s firmware to intercept data or take down the server. Ice Lake introduces Intel Platform Firmware Resilience (Intel PFR) to the Intel Xeon Scalable platform to help protect against platform firmware attacks; it is designed to detect and correct them before they can compromise or disable the machine.
Intel PFR uses an Intel FPGA as a platform root of trust to validate critical-to-boot platform firmware components before any firmware code is executed. The firmware components protected can include BIOS Flash, BMC Flash, SPI Descriptor, Intel Management Engine and power supply firmware.
Privacy-preserving, trusted platforms in the upcoming 3rd generation Xeon Scalable processors will help drive even greater innovative services, usage models and solutions for organizations looking to activate the full value of their data.
Maggie Jauregui’s introduction to hardware security is a fun story: she figured out how to spark, smoke, and permanently disable GFCI (Ground Fault Circuit Interrupter – the two button protections on plugs/sockets that prevent you from electrocuting yourself by accident with your hair dryer) wirelessly with a walkie talkie.
“I could also do this across walls with a directional antenna, and this also worked on AFCIs (Arc Fault Circuit Interrupters – part of the circuit breaker box in your garage), which meant you could drive by someone’s home and potentially turn off their lights,” she told Help Net Security.
Jauregui says she’s always been interested in hardware. She started out as an electrical engineering major but switched to computer science halfway through university, and ultimately applied to be an Intel intern in Mexico.
“After attending my first hackathon — where I actually met my husband — I’ve continued to explore my love for all things hardware, firmware, and security to this day, and have been a part of various research teams at Intel ever since,” she added. (She’s currently a member of the corporation’s Platform Armoring and Resilience team.)
What do we talk about when we talk about hardware security?
Computer systems – a category that these days includes everything from phones and laptops to wireless thermostats and other “smart” home appliances – are a combination of many hardware components (a processor, memory, I/O peripherals, etc.) that together with firmware and software are capable of delivering services and enabling the connected, data-centric world we live in.
Hardware-based security typically refers to the defenses that help protect against vulnerabilities targeting these devices, and its main focus is to make sure that the different hardware components working together are architected, implemented, and configured correctly.
“Hardware can sometimes be considered its own level of security because it often requires physical presence in order to access or modify specific fuses, jumpers, locks, etc,” Jauregui explained. This is why hardware is also used as a root of trust.
Hardware security challenges
But every hardware device has firmware – a tempting attack vector for many hackers. And though the industry has been making advancements in firmware security solutions, many organizations are still challenged by it and don’t know how to adequately protect their systems and data, she says.
She advises IT security specialists to be aware of firmware’s importance as an asset to their organization’s threat model, to make sure that the firmware on company devices is consistently updated, and to set up automated security validation tools that can scan for configuration anomalies within their platform and evaluate security-sensitive bits within their firmware.
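The “scan for configuration anomalies and evaluate security-sensitive bits” advice can be sketched as a simple policy check, in the spirit of open-source platform-security tools such as CHIPSEC. The register names and bit positions below are hypothetical placeholders, not real chipset offsets:

```python
# Toy firmware-configuration audit: verify that security-sensitive
# lock/protect bits hold their expected values. Register names and bit
# positions are HYPOTHETICAL examples, not real hardware offsets.

# (register_name, bit_position, expected_value)
POLICY = [
    ("BIOS_CONTROL", 0, 1),    # e.g., a flash write-protect bit
    ("BIOS_CONTROL", 5, 1),    # e.g., an SMM protection bit
    ("FLASH_LOCKDOWN", 0, 1),  # e.g., a descriptor lock bit
]

def audit(registers):
    """Return a list of policy violations for a dict of register values."""
    failures = []
    for reg, bit, expected in POLICY:
        actual = (registers.get(reg, 0) >> bit) & 1
        if actual != expected:
            failures.append(f"{reg} bit {bit}: expected {expected}, got {actual}")
    return failures

# A platform that left the SMM protection bit clear:
print(audit({"BIOS_CONTROL": 0b000001, "FLASH_LOCKDOWN": 0b1}))
# ['BIOS_CONTROL bit 5: expected 1, got 0']
```

Run routinely (e.g., in CI for golden images or on deployed fleets), a check like this catches configuration drift long before an attacker can exploit it.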
“Additionally, Confidential Computing has emerged as a key strategy for helping to secure data in use,” she noted. “It uses hardware memory protections to better isolate sensitive data payloads. This represents a fundamental shift in how computation is done at the hardware level and will change how vendors can structure their application programs.”
Finally, the COVID-19 pandemic has somewhat disrupted the hardware supply chain and has brought to the fore another challenge.
“Because a computing system is typically composed of multiple components from different manufacturers, each with its own level of scrutiny in relation to potential supply chain attacks, it’s challenging to verify the integrity across all stages of its lifecycle,” Jauregui explained.
“This is why it is critical for companies to work together on a validation and attestation solution for hardware and firmware that can be conducted prior to integration into a larger system. If the industry as a whole comes together, we can create more measures to help protect a product through its entire lifecycle.”
Achieving security in low-end systems on chips
The proliferation of Internet of Things devices and embedded systems, and our growing reliance on them, makes the security of these systems extremely important.
As they commonly rely on systems on chips (SoCs) – integrated circuits that consolidate the components of a computer or other electronic system on a single microchip – securing these devices is a different proposition than securing “classic” computer systems, especially if they rely on low-end SoCs.
Jauregui says that there is no single blanket solution approach to implement security of embedded systems, and that while some of the general hardware security recommendations apply, many do not.
“I highly recommend readers to check out the book Demystifying Internet of Things Security written by Intel scientists and Principal Engineers. It’s an in-depth look at the threat model, secure boot, chain of trust, and the SW stack leading up to defense-in-depth for embedded systems. It also examines the different security building blocks available in Intel Architecture (IA) based IoT platforms and breaks down some of the misconceptions of the Internet of Things,” she added.
“This book explores the challenges to secure these devices and provides suggestions to make them more immune to different threats originating from within and outside the network.”
For those security professionals who are interested in specializing in hardware security, she advises being curious about how things work and doing research, following folks doing interesting things on Twitter and asking them things, and watching hardware security conference talks and trying to reproduce the issues.
“Learn by doing. And if you want someone to lead you through it, go take a class! I recommend hardware security classes by Joe FitzPatrick and Joe Grand, as they are brilliant hardware researchers and excellent teachers,” she concluded.
Intel announced new enhanced internet of things (IoT) capabilities. The 11th Gen Intel Core processors, Intel Atom x6000E series, and Intel Pentium and Celeron N and J series bring new artificial intelligence (AI), security, functional safety and real-time capabilities to edge customers.
With a robust hardware and software portfolio, an unparalleled ecosystem and 15,000 customer deployments globally, Intel is providing solutions for an edge silicon market expected to reach $65 billion by 2024.
“By 2023, up to 70% of all enterprises will process data at the edge. 11th Gen Intel Core processors, Intel Atom x6000E series, and Intel Pentium and Celeron N and J series processors represent our most significant step forward yet in enhancements for IoT, bringing features that address our customers’ current needs, while setting the foundation for capabilities with advancements in AI and 5G,” said John Healy, Intel vice president of the Internet of Things Group and general manager of Platform Management and Customer Engineering.
Why it’s important
Intel works closely with customers to build proofs of concept, optimize solutions and collect feedback along the way. Innovations delivered with 11th Gen Intel Core processors, Intel Atom x6000E series, and Intel Pentium and Celeron N and J series processors are a response to challenges felt across the IoT industry: edge complexity, total cost of ownership and a range of environmental conditions.
Combining a common and seamless developer experience with software and tools like the Edge Software Hub’s Edge Insights for Industrial and the Intel Distribution of OpenVINO toolkit, Intel helps customers and developers get to market faster and deliver more powerful outcomes with optimized, containerized packages to enable sensing, vision, automation and other transformative edge applications.
For example, when combined with 11th Gen’s SuperFin process improvements and other enhancements, OpenVINO running on an 11th Gen Core i5 delivers amazing AI performance: up to 2 times faster inferences per second than a prior 8th Gen Core i5-8500 processor when running on just the CPU in each product.
About 11th Gen Core processors
Building on the recently announced client processors, 11th Gen Core is enhanced specifically for essential IoT applications that require high-speed processing, computer vision and low-latency deterministic computing.
It delivers up to a 23% performance gain in single-thread performance, a 19% gain in multithread performance and up to a 2.95x performance gain in graphics gen on gen. New dual-video decode boxes allow the processor to ingest up to 40 simultaneous video streams at 1080p 30 frames per second and output up to four channels of 4K or two channels of 8K video.
AI-inferencing algorithms can run on up to 96 graphic execution units (INT8) or run on the CPU with vector neural network instructions (VNNI) built in.
With Intel Time Coordinated Computing (Intel TCC Technology) and time-sensitive networking (TSN) technologies, 11th Gen processors enable real-time computing demands while delivering deterministic performance across a variety of use cases:
- Industrial sector: Mission-critical control systems (PLC, robotics, etc.), industrial PCs and human-machine interfaces.
- Retail, banking and hospitality: Intelligent, immersive digital signage, interactive kiosks and automated checkout.
- Healthcare: Next-generation medical imaging devices with high-resolution displays and AI-powered diagnostics.
- Smart city: Smart network video recorders with onboard AI inferencing and analytics.
Intel’s 11th Gen Core processors already have over 90 partners committed to delivering solutions to meet customers’ demands.
About Intel Atom x6000E Series and Intel Pentium and Celeron N and J series processors
These represent Intel’s first processor platform enhanced for IoT. They deliver enhanced real-time performance and efficiency; up to 2 times better 3D graphics; a dedicated real-time offload engine; Intel Programmable Services Engine, which supports out-of-band and in-band remote device management; enhanced I/O and storage options; and integrated 2.5GbE time-sensitive networking.
They can support 4Kp60 resolution on up to three simultaneous displays, meet strict functional safety requirements with the Intel Safety Island and include built-in hardware-based security. These processors have a variety of use cases, including:
- Industrial: Real-time control systems and devices that meet functional safety requirements for industrial robots and for chemical, oil field and energy grid-control applications.
- Transportation: Vehicle controls, fleet monitoring and management systems that synchronize inputs from multiple sensors and direct actions in semiautonomous buses, trains, ships and trucks.
- Healthcare: Medical displays, carts, service robots, entry-level ultrasound machines, gateways and kiosks that require AI and computer vision with reduced energy consumption.
- Retail and hospitality: Fixed and mobile point-of-sale systems with high-resolution graphics for retail and quick-service restaurants.
On this September 2020 Patch Tuesday:
- Microsoft has plugged 129 security holes, including a critical RCE flaw that could be triggered by sending a specially crafted email to an affected Exchange Server installation
- Adobe has delivered security updates for Adobe Experience Manager, AEM Forms, Framemaker and InDesign
- Intel has released four security advisories
- SAP has released 10 security notes and updates to six previously released notes
Microsoft has released patches for 129 CVEs, 23 of which are “critical”, 105 “important”, and one “medium”-risk (a security feature bypass flaw in SQL Server Reporting Services). None of them are publicly known or being actively exploited.
Trend Micro Zero Day Initiative’s Dustin Childs says that patching CVE-2020-16875, a memory corruption vulnerability in Microsoft Exchange, should be top priority for organizations using the popular mail server.
“This patch corrects a vulnerability that allows an attacker to execute code at SYSTEM by sending a specially crafted email to an affected Exchange Server. That doesn’t quite make it wormable, but it’s about the worst-case scenario for Exchange servers,” he explained. “We have seen the previously patched Exchange bug CVE-2020-0688 used in the wild, and that requires authentication. We’ll likely see this one in the wild soon.”
Another interesting patch released this month is that for CVE-2020-0951, a security feature bypass flaw in Windows Defender Application Control (WDAC). Patches are available for Windows 10 and Windows Server 2016 and above.
“This patch is interesting for reasons beyond just the bug being fixed. An attacker with administrative privileges on a local machine could connect to a PowerShell session and send commands to execute arbitrary code. This behavior should be blocked by WDAC, which does make this an interesting bypass. However, what’s really interesting is that this is getting patched at all,” Childs explained.
“Vulnerabilities that require administrative access to exploit typically do not get patches. I’m curious about what makes this one different.”
Many of the critical and important flaws fixed this time affect various editions of Microsoft SharePoint (Server, Enterprise, Foundation). Some require authentication, but many do not, so if you don’t want to fall prey to exploits hidden in specially crafted web requests, pages or SharePoint application packages, see that you install the required updates soon.
Satnam Narang, staff research engineer at Tenable, pointed out that one of them – CVE-2020-1210 – is reminiscent of a similar SharePoint remote code execution flaw, CVE-2019-0604, that has been exploited in the wild by threat actors since at least April 2019.
CVE-2020-0922, an RCE in Microsoft COM (Component Object Model), should also be patched quickly on all Windows and Windows Server systems.
He also advised organizations in the financial industry who use Microsoft Dynamics 365 for Finance and Operations (on-premises) and Microsoft Dynamics 365 (on-premises) to quickly patch CVE-2020-16857 and CVE-2020-16862.
“Impacting the on-premise servers with this finance and operations focused service installed, both exploits require a specifically created file to exploit the security vulnerability, allowing the attacker to gain remote code execution capability. More concerning with these vulnerabilities is that both flaws, if exploited, would allow an attacker to steal documents and data deemed critical. Due to the nature and use of Microsoft Dynamics in the financial industry, a theft like this could spell trouble for any company of any size,” he added.
Jimmy Graham, Sr. Director of Product Management, Qualys, says that Windows Codecs, GDI+, Browser, COM, and Text Service Module vulnerabilities should be prioritized for workstation-type devices.
Adobe has released security updates for Adobe Experience Manager (AEM) – a web-based client-server system for building, managing and deploying commercial websites and related services – and the AEM Forms add-on package for all platforms, Adobe Framemaker for Windows and Adobe InDesign for macOS.
The AEM and AEM Forms updates are more important than the rest.
The Adobe Framemaker update fixes two critical flaws that could lead to code execution, and the Adobe InDesign update fixes five, but as vulnerabilities in these two offerings are not often targeted by attackers, admins are advised to implement these updates after more critical ones have been deployed.
None of the fixed vulnerabilities are being currently exploited in the wild.
Intel took advantage of the September 2020 Patch Tuesday to release four advisories, accompanying fixes for the Intel Driver & Support Assistant, BIOS firmware for multiple Intel Platforms, and Intel Active Management Technology (AMT) and Intel Standard Manageability (ISM).
The latter are the most important, as they address a privilege escalation flaw that has been deemed “critical” for provisioned systems.
SAP marked the September 2020 Patch Tuesday by releasing 10 security notes and updates to six previously released ones (for SAP Solution Manager, SAP NetWeaver, SAPUI5 and SAP NetWeaver AS JAVA).
Patches have been provided for newly fixed flaws in a variety of offerings, including SAP Marketing, SAP NetWeaver, SAP Bank Analyzer, SAP S/4HANA Financial Products, SAP Business Objects Business Intelligence Platform, and others.
As expected, Microsoft and Adobe observed the August 2020 Patch Tuesday, but many other software firms decided to push out security updates as well. Apple released iCloud for Windows updates and Google pushed out fixes to Chrome. They were followed by Intel, SAP and Citrix. Intel’s updates It’s not unusual for Intel to take advantage of a Patch Tuesday. This time they released 18 advisories. Among the fixed flaws are: DoS, Information Disclosure and EoP … More
The post Intel, SAP, and Citrix release critical security updates appeared first on Help Net Security.
As communications service providers (CoSPs) evolve their networks to support the rollout of future 5G networks, they are increasingly adopting a software-defined, virtualized infrastructure. Virtualization of the core network has already enabled CoSPs to improve operational costs and bring services to market faster. This expanded collaboration between Intel and VMware aims to offer CoSPs reduced development cycles and scale across multiple designs.
Many CoSPs are embracing the idea of having open and disaggregated RAN architectures that can give them added flexibility and choice, as well as programmability to create and deploy new services that require fine grained radio resource control and dynamic slicing to provide differentiated experiences such as cloud gaming and cloud controlled robotics. This collaboration seeks to simplify the steps and reduce the integration effort involved in creating deployable virtualized RAN solutions.
Intel and VMware will work with a rich ecosystem, including telecom equipment manufacturers, original equipment manufacturers and RAN software vendors, to help CoSPs more easily build on top of the vRAN platform to address specific use cases. As part of this effort, Intel and VMware will collaborate in building programmable open interfaces that leverage Intel’s FlexRAN software reference architecture and a VMware RAN Intelligent Controller (RIC), to enable development of innovative radio network functions using AI/ML learning for real-time resource management, traffic steering and dynamic slicing. This, in turn, will help optimize quality of experience (QoE) for the rollout of new 5G vertical use cases.
“Many CoSPs are choosing to extend the benefits of network virtualization into the RAN for increased agility as they roll out new 5G services, but the software integration can be rather complex. With an integrated vRAN platform, combined with leading technology and expertise from Intel and VMware, CoSPs are positioned to benefit from accelerated time to deployment of innovative services at the edge of their network,” explained Dan Rodriguez, corporate vice president and general manager, Network Platforms Group, Intel.
“CoSPs around the globe rely on VMware’s Telco Cloud platform to deploy and manage myriad core network functions. As they look to extend their software-defined infrastructure out to the RAN, there are tremendous benefits to delivering all network functions on a single platform,” said Shekar Ayyar, executive vice president and general manager, Telco and Edge Cloud, VMware. “With an integrated platform, CoSPs will be able to deploy new network functions across the same Telco Cloud architecture, from core to RAN, enabling the scale and agility needed to deliver services across a 5G network more efficiently.”
Intel unveils 3rd Gen Intel Xeon Scalable processors, additions to its hardware and software AI portfolio
Intel introduced its 3rd Gen Intel Xeon Scalable processors and additions to its hardware and software AI portfolio, enabling customers to accelerate the development and use of artificial intelligence (AI) and analytics workloads running in data center, network and intelligent-edge environments.
As the industry’s first mainstream server processor with built-in bfloat16 support, Intel’s new 3rd Gen Xeon Scalable processor makes AI inference and training more widely deployable on general-purpose CPUs for applications that include image classification, recommendation engines, speech recognition and language modeling.
“The ability to rapidly deploy AI and data analytics is essential for today’s businesses. We remain committed to enhancing built-in AI acceleration and software optimizations within the processor that powers the world’s data center and edge solutions, as well as delivering an unmatched silicon foundation to unleash insight from data.” – Lisa Spelman, Intel corporate vice president and general manager, Xeon and Memory Group
AI and analytics open new opportunities for customers across a broad range of industries, including finance, healthcare, industrial, telecom and transportation.
IDC predicts that by 2021, 75% of commercial enterprise apps will use AI. And by 2025, IDC estimates that roughly a quarter of all data generated will be created in real time, with various internet of things (IoT) devices creating 95% of that volume growth.
Intel’s new data platforms, coupled with a thriving ecosystem of partners using Intel AI technologies, are optimized for businesses to monetize their data through the deployment of intelligent AI and analytics services.
New 3rd gen Intel Xeon Scalable Processors
Intel is further extending its investment in built-in AI acceleration in the new 3rd Gen Intel Xeon Scalable processors through the integration of bfloat16 support into the processor’s unique Intel DL Boost technology.
Bfloat16 is a compact numeric format that uses half the bits as today’s FP32 format but achieves comparable model accuracy with minimal — if any — software changes required. The addition of bfloat16 support accelerates both AI training and inference performance in the CPU.
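The relationship between bfloat16 and FP32 can be sketched in a few lines: bfloat16 keeps FP32’s full 8-bit exponent (hence the preserved dynamic range) and simply truncates the mantissa to 7 bits. The Python sketch below is purely illustrative (the function names are ours, not from any Intel library) and uses truncation rather than the round-to-nearest hardware typically performs:

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Truncate an FP32 value to its top 16 bits (bfloat16, no rounding)."""
    fp32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return fp32_bits >> 16

def bf16_to_fp32(bits16: int) -> float:
    """Expand a bfloat16 bit pattern back to FP32 by zero-filling the mantissa."""
    return struct.unpack(">f", struct.pack(">I", bits16 << 16))[0]

# 1.0 survives exactly (0x3F80); pi loses mantissa precision but keeps
# its magnitude: bf16_to_fp32(fp32_to_bf16_bits(3.14159265)) == 3.140625
```

Because the exponent field is untouched, overflow behavior matches FP32, which is a big part of why model accuracy tends to hold with minimal code changes.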
Intel-optimized distributions for leading deep learning frameworks (including TensorFlow and Pytorch) support bfloat16 and are available through the Intel AI Analytics toolkit. Intel also delivers bfloat16 optimizations into its OpenVINO toolkit and the ONNX Runtime environment to ease inference deployments.
The 3rd Gen Intel Xeon Scalable processors (code-named “Cooper Lake”) evolve Intel’s 4- and 8-socket processor offering. The processor is designed for deep learning, virtual machine (VM) density, in-memory database, mission-critical applications and analytics-intensive workloads.
Customers refreshing aging infrastructure can expect an average estimated gain of 1.9 times on popular workloads and up to 2.2 times more VMs compared with 5-year-old 4-socket platform equivalents.
New Intel Optane persistent memory
As part of the 3rd Gen Intel Xeon Scalable platform, the company also announced the Intel Optane persistent memory 200 series, providing customers up to 4.5TB of memory per socket to manage data-intensive workloads, such as in-memory databases, dense virtualization, analytics and high-performance computing.
New Intel 3D NAND SSDs
For systems that store data in all-flash arrays, Intel announced the availability of its next-generation high-capacity Intel 3D NAND SSDs, the Intel SSD D7-P5500 and P5600.
These 3D NAND SSDs are built with Intel’s latest triple-level cell (TLC) 3D NAND technology and an all-new low-latency PCIe controller to meet the intense I/O requirements of AI and analytics workloads, and include advanced features that improve IT efficiency and data security.
First Intel AI-optimized FPGA
Intel disclosed its upcoming Intel Stratix 10 NX FPGAs, Intel’s first AI-optimized FPGAs targeted for high-bandwidth, low-latency AI acceleration. These FPGAs will offer customers customizable, reconfigurable and scalable AI acceleration for compute-demanding applications such as natural language processing and fraud detection.
Intel Stratix 10 NX FPGAs include integrated high-bandwidth memory (HBM), high-performance networking capabilities and new AI-optimized arithmetic blocks called AI Tensor Blocks, which contain dense arrays of lower-precision multipliers typically used for AI model arithmetic.
OneAPI cross-architecture development for ongoing AI innovation
As Intel expands its advanced AI product portfolio to meet diverse customer needs, it is also paving the way to simplify heterogeneous programming for developers with its oneAPI cross-architecture tools portfolio to accelerate performance and increase productivity.
With these advanced tools, developers can accelerate AI workloads across Intel CPUs, GPUs and FPGAs, and future-proof their code for today’s and the next generations of Intel processors and accelerators.
Enhanced Intel Select Solutions portfolio addresses IT’s top requirements
Intel has enhanced its Select Solutions portfolio to accelerate deployment of IT’s most urgent requirements, highlighting the value of pre-verified solution delivery in today’s rapidly evolving business climate. Three new and five enhanced Intel Select Solutions focused on analytics, AI and hyper-converged infrastructure have been announced.
The enhanced Intel Select Solution for Genomics Analytics is being used around the world to find a vaccine for COVID-19 and the new Intel Select Solution for VMware Horizon VDI on vSAN is being used to enhance remote learning.
When products are available
The 3rd Gen Intel Xeon Scalable processors and Intel Optane persistent memory 200 series are shipping to customers today. In May, Facebook announced that 3rd Gen Intel Xeon Scalable processors are the foundation for its newest Open Compute Platform (OCP) servers, and other leading CSPs, including Alibaba, Baidu and Tencent, have announced they are adopting the next-generation processors.
General OEM systems availability is expected in the second half of 2020. The Intel SSD D7-P5500 and P5600 3D NAND SSDs are available now. And the Intel Stratix 10 NX FPGA is expected to be available in the second half of 2020.
19 vulnerabilities – some of them allowing remote code execution – have been discovered in a TCP/IP stack/library used in hundreds of millions of IoT and OT devices deployed by organizations in a wide variety of industries and sectors.
“Affected vendors range from one-person boutique shops to Fortune 500 multinational corporations, including HP, Schneider Electric, Intel, Rockwell Automation, Caterpillar, Baxter, as well as many other major international vendors,” say the researchers who discovered the flaws.
About the vulnerable TCP/IP software library
The vulnerable library was developed by US-based Treck and a Japanese company named Elmic Systems (now Zuken Elmic) in the 1990s. At one point in time, the two companies parted ways and each continued developing a separate branch of the stack/library.
The one developed by Treck – Treck TCP/IP – is marketed in the U.S. and the other one, dubbed Kasago TCP/IP, is marketed by Zuken Elmic in Asia.
The library’s high reliability, performance, and configurability are what made it so popular and widely deployed.
“The [Treck TCP/IP] library could be used as-is, configured for a wide range of uses, or incorporated into a larger library. The user could buy the library in source code format and edit it extensively. It can be incorporated into the code and implanted into a wide range of device types,” the researchers explained.
“The original purchaser could decide to rebrand, or could be acquired by a different corporation, with the original library history lost in company archives. Over time, the original library component could become virtually unrecognizable. This is why, long after the original vulnerability was identified and patched, vulnerabilities may still remain in the field, since tracing the supply chain trail may be practically impossible.”
The vulnerabilities were discovered by Moshe Kol and Shlomi Oberman from JSOF in the Treck TCP/IP library, and Zuken Elmic confirmed that some of them affect the Kasago library.
About the vulnerabilities
Collectively dubbed Ripple20, the vulnerabilities (numbered CVE-2020-11896 through CVE-2020-11914) range from critical to low-risk. Four enable remote code execution. Others could be used to achieve sensitive information disclosure, (persistent) denial of service, and more.
“One of the critical vulnerabilities is in the DNS protocol and may potentially be exploitable by a sophisticated attacker over the internet, from outside the network boundaries, even on devices that are not connected to the internet,” the researchers noted.
“Most of the vulnerabilities are true zero-days, with 4 of them having been closed over the years as part of routine code changes, but remained open in some of the affected devices (3 lower severity, 1 higher). Many of the vulnerabilities have several variants due to the stack configurability and code changes over the years.”
The researchers plan to release technical reports on some of them and are scheduled to demonstrate exploitation of the DNS vulnerability on a Schneider Electric APC UPS device at Black Hat USA in August.
The Treck TCP/IP library did not receive much attention from security researchers in the past. After JSOF researchers decided to probe it and discovered the flaws, they also discovered that contacting the many, many vendors who implement it was going to be a time-consuming task.
Treck was made aware of the vulnerabilities and fixed them, but insisted on contacting clients and users of the code library itself and providing the appropriate patches directly.
But, since some of the vulnerabilities also affect the Kasago library, JSOF involved multiple national computer emergency response team (CERT) organizations and regulators in the disclosure process.
“CERT groups focus on ways to identify and mitigate security risks. For example, they can reach a much larger target group of potential users with blast announcements, ‘mass-mailings’ that they broadcast to a long list of participating companies to notify them of the potential vulnerability. Once users are identified, mitigation comes into play,” the researchers explained.
“While the best response might be to install the original Treck patch, there are many situations in which installing the original patch is not possible. CERTs work to develop alternative approaches that can be used to minimize or effectively eliminate the risk, even if patching is not an option.”
The Ripple20 vulnerabilities have been dubbed thusly because of the extent of their impact.
“The wide-spread dissemination of the software library (and its internal vulnerabilities) was a natural consequence of the supply chain ‘ripple-effect’. A single vulnerable component, though it may be relatively small in and of itself, can ripple outward to impact a wide range of industries, applications, companies, and people,” they noted.
“The inclusion of the number ’20’ denotes our disclosure process beginning in 2020, while additionally symbolizing and giving deference to our belief in the potential for additional vulnerabilities to be found from the original 19,” they told Help Net Security.
The researchers have pointed out that the vulnerability disclosure process, their own efforts to identify users of the Treck library, and the patch/mitigation dissemination process have been immensely aided by Treck, various CERTs, the CISA, and several security vendors (Forescout, CyberMDX).
A number of vendors have confirmed that their offerings are affected by the Ripple20 flaws. JSOF has compiled a list of affected and unaffected vendors, which will be updated as additional information becomes available.
Device vendors should update the Treck library to a fixed version (6.0.1.67 or higher), while organizations should check their network for affected devices and contact the vendors for more information on how to mitigate the exploitation risk. The researchers will make available, upon request, a script to help companies identify Treck products on their networks.
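Once a vendor reports which Treck version a device is running, deciding whether the device needs attention reduces to a version comparison. A minimal illustrative Python sketch, assuming the 6.0.1.67 fixed version cited in the Ripple20 disclosure (always confirm the threshold against your vendor’s own advisory):

```python
def parse_version(s: str) -> tuple:
    """Split a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in s.split("."))

# Fixed Treck release per the Ripple20 disclosure; product lines may
# have their own patched versions, so treat this as a default baseline.
FIXED = parse_version("6.0.1.67")

def is_vulnerable(reported_version: str) -> bool:
    """True if the reported Treck version predates the fixed release."""
    return parse_version(reported_version) < FIXED
```

Tuple comparison handles the lexicographic ordering correctly, so `is_vulnerable("6.0.1.66")` is true while `is_vulnerable("6.0.1.67")` is false.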
“Fixing these vulnerabilities presents its own set of challenges, even once they’ve been identified on the network. Some already have patches available. But there are also complicating factors,” Forescout CEO and President Michael DeCesare noted.
“With these types of supply chain vulnerabilities and embedded components, the vendor that is creating the patch isn’t necessarily the one that will release it. That can delay the issuance of a patch. There are also no guarantees that the device vendor is still in business, or that they still support the device. The complex nature of the supply chain may also mean the device is not patchable at all, even if it needs to remain on the network. In such cases, mitigating controls such as segmentation will be needed to limit its risk.”
The various CERTs and agencies like CISA will surely offer mitigation advice via security advisories.
The history of hacking has largely been a back-and-forth game, with attackers devising a technique to breach a system, defenders constructing a countermeasure that prevents the technique, and hackers devising a new way to bypass system security. On Monday, Intel is announcing its plans to bake a new parry directly into its CPUs that’s designed to thwart software exploits that execute malicious code on vulnerable computers.
Control-Flow Enforcement Technology, or CET, represents a fundamental change in the way processors execute instructions from applications such as Web browsers, email clients, or PDF readers. Jointly developed by Intel and Microsoft, CET is designed to thwart a technique known as return-oriented programming, which hackers use to bypass anti-exploit measures software developers introduced about a decade ago. While Intel first published its implementation of CET in 2016, the company on Monday is saying that its Tiger Lake CPU microarchitecture will be the first to include it.
ROP, as return-oriented programming is usually called, was software exploiters’ response to protections such as Executable Space Protection and address space layout randomization, which made their way into Windows, macOS, and Linux a little less than two decades ago. These defenses were designed to significantly lessen the damage software exploits could inflict by introducing changes to system memory that prevented the execution of malicious code. Even when successfully targeting a buffer overflow or other vulnerability, the exploit resulted only in a system or application crash, rather than a fatal system compromise.
ROP allowed attackers to regain the high ground. Rather than using malicious code written by the attacker, ROP attacks repurpose functions that benign applications or OS routines have already placed into a region of memory known as the stack. The “return” in ROP refers to use of the RET instruction that’s central to reordering the code flow.
Alex Ionescu, a veteran Windows security expert and VP of engineering at security firm CrowdStrike, likes to say that if a benign program is like a building made of Lego bricks that were built in a specific sequence, ROP uses the same Lego pieces but in a different order. In so doing, ROP converts the building into a spaceship. The technique is able to bypass the anti-malware defenses because it uses memory-resident code that’s already permitted to be executed.
CET introduces changes in the CPU that create a new stack called the control stack. This stack can’t be modified by attackers and doesn’t store any application data; it holds only copies of the return addresses of the Lego bricks already on the data stack. Because of this, even if an attacker corrupts a return address in the data stack, the control stack retains the correct one. The processor can detect the mismatch and halt execution.
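The mechanism can be modeled in miniature. The toy Python class below is a deliberate simplification (the names are ours, and real shadow stacks live in protected memory enforced by the CPU, not in software): every CALL pushes the return address onto both stacks, and every RET faults if the two copies disagree:

```python
class ShadowStackCPU:
    """Toy model of CET's shadow (control) stack -- not real hardware behavior."""

    def __init__(self):
        self.data_stack = []    # ordinary stack: attacker-corruptible
        self.shadow_stack = []  # protected stack: holds only return addresses

    def call(self, return_addr: int):
        # CALL pushes the return address onto both stacks.
        self.data_stack.append(return_addr)
        self.shadow_stack.append(return_addr)

    def ret(self) -> int:
        # RET compares both copies; a mismatch means the data stack was
        # tampered with (e.g., by a ROP chain) and triggers a fault.
        addr = self.data_stack.pop()
        if addr != self.shadow_stack.pop():
            raise RuntimeError("control-protection fault: return address mismatch")
        return addr
```

Overwriting `data_stack[-1]` with a gadget address and then calling `ret()` raises the fault, which is the point: the corrupted return address never gets a chance to execute.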
“Because there is no effective software mitigation against ROP, CET will be very effective at detecting and stopping this class of vulnerability,” Ionescu told me. “Previously, operating systems and security solutions had to guess or infer that ROP had happened, or perform forensic analysis, or detect the second stage payloads/effect of the exploit.”
Not that CET is limited to defenses against ROP. CET provides a host of additional protections, some of which thwart exploitation techniques known as jump-oriented programming and call-oriented programming, to name just two. ROP, however, is among the most interesting aspects of CET.
Those who do not remember the past
Intel has built other security functions into its CPUs with less-than-stellar results. One is Intel’s SGX, short for Software Guard Extensions, which is supposed to carve out impenetrable chunks of protected memory for security-sensitive functions such as the creation of cryptographic keys. Another security add-on from Intel is known as the Converged Security and Management Engine, or simply the Management Engine. It’s a subsystem inside Intel CPUs and chipsets that implements a host of sensitive functions, among them the firmware-based Trusted Platform Module used for silicon-based encryption, authentication of UEFI BIOS firmware, and Microsoft’s System Guard and BitLocker.
A steady stream of security flaws discovered in both CPU-resident features, however, has made them vulnerable to a variety of attacks over the years. The most recent SGX vulnerabilities were disclosed just last week.
It’s tempting to think that CET will be similarly easy to defeat, or worse, will expose users to hacks that wouldn’t be possible if the protection hadn’t been added. But Joseph Fitzpatrick, a hardware hacker and a researcher at SecuringHardware.com, says he’s optimistic CET will perform better. He explained:
One distinct difference that makes me less skeptical of this type of feature versus something like SGX or ME is that both of those are “adding on” security features, as opposed to hardening existing features. ME basically added a management layer outside the operating system. SGX adds operating modes that theoretically shouldn’t be able to be manipulated by a malicious or compromised operating system. CET merely adds mechanisms to prevent normal operation—returning to addresses off the stack and jumping in and out of the wrong places in code—from completing successfully. Failure of CET to do its job only allows normal operation. It doesn’t grant the attacker access to more capabilities.
Once CET-capable CPUs are available, the protection will work only when the processor is running an operating system with the necessary support. Windows 10 Version 2004 released last month provides that support. Intel still isn’t saying when Tiger Lake CPUs will be released. While the protection could give defenders an important new tool, Ionescu and fellow researcher Yarden Shafir have already devised bypasses for it. Expect them to end up in real-world attacks within the decade.
Built into virtually every hardware device, firmware is lower-level software that is programmed to ensure that hardware functions properly.
As software security has been significantly hardened over the past two decades, hackers have responded by moving down the stack to focus on firmware entry points. Firmware offers a target that basic security controls can’t access or scan as easily as software, while allowing attackers to persist and continue leveraging many of their tried-and-true attack techniques.
The industry has reacted to this shift in attackers’ focus by making advancements in firmware security solutions and best practices over the past decade. That said, many organizations are still suffering from firmware security blind spots that prevent them from adequately protecting systems and data.
This can be caused by a variety of factors, from simple platform misconfigurations or reluctance about installing new updates to a general lack of awareness about the imperative need for firmware security.
In short, many don’t know what firmware security hazards exist today. To help readers stay more informed, here are three firmware security blind spots every organization should consider addressing to improve its overall security stance:
1. Firmware security awareness
The security of firmware running on the devices we use every day has become a focal point for researchers across the security community. With multiple components running a variety of different firmware, it might be overwhelming to know where to start. A good first step is recognizing firmware as an asset in your organization’s threat model and establishing security objectives for confidentiality, integrity, and availability (CIA). Here are some examples of how CIA applies to firmware security:
- Confidentiality: There may be secrets in firmware that require protection. The BIOS password, for instance, might grant attackers authentication bypass if they were able to access firmware contents.
- Integrity: This means ensuring the firmware running on a system is the firmware intended to be running and hasn’t been corrupted or modified. Features such as secure boot and hardware roots of trust support the measurement and verification of the firmware you’re running.
- Availability: In most cases, ensuring devices have access to their firmware in order to operate normally is the top priority for an organization as far as firmware is concerned. A potential breach of this security objective would come in the form of a permanent denial of service (PDoS) attack, which would require manual re-flashing of system components (a sometimes costly and cumbersome solution).
The first step toward firmware security is awareness of its importance as an asset to an organization’s threat model, along with the definition of CIA objectives.
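The integrity objective above boils down to measurement and comparison: hash the firmware you are actually running and check it against a known-good value. The Python sketch below is a loose software analogy for what measured boot and hardware roots of trust do in silicon; the image bytes and function names are purely illustrative:

```python
import hashlib
import hmac

# Illustrative "golden" measurement of a known-good firmware image
# (the image bytes here are placeholders, not real firmware).
GOLDEN_IMAGE = b"known-good firmware image"
GOLDEN_HASH = hashlib.sha256(GOLDEN_IMAGE).hexdigest()

def measure(firmware: bytes) -> str:
    """Measure a firmware image, as a root of trust would before boot."""
    return hashlib.sha256(firmware).hexdigest()

def verify(firmware: bytes) -> bool:
    """Compare the measurement against the golden value in constant time."""
    return hmac.compare_digest(measure(firmware), GOLDEN_HASH)
```

A single flipped byte changes the measurement entirely, so even a subtle firmware implant fails verification; real secure-boot chains add signatures on top of this so the golden value itself can be trusted.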
2. Firmware updates
The increase in low-level security research has led to an equivalent increase in findings and fixes provided by vendors, contributing to the gradual improvement of platform resilience. Vendors often work with researchers through their bug bounty programs, their in-house research teams, and with researchers presenting their work in conferences around the world, in order to conduct coordinated disclosure of firmware security vulnerabilities. The industry has come a long way enabling collaboration, enabling processes and accelerating response times towards a common goal: improving the overall health and resilience of computer systems.
The firmware update process can be complex and time-consuming, and involves a variety of parties: researchers, device manufacturers, OEMs, etc. For example, once UEFI’s EDK II source code has been updated with a new fix, vendors must adopt it and push the changes out to end customers. Vendors issue firmware updates for a variety of reasons, but some of the most important patches are designed explicitly to address newly discovered security vulnerabilities.
Regular firmware updates are vital to a strong security posture, but many organizations are hesitant to introduce new patches due to a range of factors. Whether it’s concerns over the potential time or cost involved, or fear of platform bricking potential, there are a variety of reasons why updates are left uninstalled. Delaying or forgoing available fixes, however, increases the amount of time your organization may be at risk.
A good example of this is WannaCry. Although Microsoft had previously released updates to address the exploit, the WannaCry ransomware wreaked havoc on hundreds of thousands of unpatched computers throughout the spring of 2017, affecting hundreds of countries and causing billions of dollars in damages. While this outbreak wasn’t the result of a firmware vulnerability specifically, it offers a stark illustration of what can happen when organizations choose not to apply patches for known threats.
Installing firmware updates regularly is arguably one of the most simple and powerful steps you can take toward better security today. Without them, your organization will be at greater risk of sustaining a security incident, unaware of fixes for known vulnerabilities.
If you’re concerned that installing firmware updates might inadvertently break your organization’s systems, consider conducting field tests on a small batch of systems before rolling them out company-wide and remember to always have a backup of the current image of your platform to revert back to as a precautionary measure. Be sure to establish a firmware update cadence that works for your organization in order to keep your systems up to date with current firmware protections at minimal risk.
3. Platform misconfigurations
Another issue that can cause firmware security risks is platform misconfigurations. Once powered on, a platform follows a complex set of steps to properly configure the computer for runtime operations. There are many time- and sequence-based elements and expectations for how firmware and hardware interact during this process, and security assumptions can be broken if the platform isn’t set up properly.
Disabled security features such as secure boot, VT-d, port protections (like Thunderbolt), execution prevention, and more are examples of potentially costly platform misconfigurations. All sorts of firmware security risks can arise if an engineer forgets a key configuration step or fails to properly configure one of the hundreds of bits involved.
Most platform misconfigurations are difficult to detect without automated security validation tools: different generations of platforms may define registers differently, there is a long list of things to check for, and there may be dependencies between the settings. It can quickly become cumbersome to keep track of proper platform configurations in a cumulative way.
Fortunately, tools like the Intel-led, open-source Chipsec project can scan for configuration anomalies within your platform and evaluate security-sensitive bits within your firmware to identify misconfigurations automatically. As a truly cumulative, open-source tool, Chipsec is updated regularly with the most recent threat insights so organizations everywhere can benefit from an ever-growing body of industry research. Chipsec also has the ability to automatically detect the platform being run in order to set register definitions. On top of scanning, it also offers several firmware security tools including fuzzing, manual testing, and forensic analysis.
Although there are a few solutions with the capability to inspect a system’s configuration, running a Chipsec scan is a free and quick way to ensure a particular system’s settings are set to recommended values.
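Conceptually, what such a configuration scan does is compare security-sensitive bits of platform registers against expected values. The hypothetical Python sketch below illustrates the idea with bit positions modeled loosely on BIOS write-protect controls; it is not an authoritative register map, and real platforms should be checked with Chipsec itself:

```python
# Hypothetical audit of a BIOS write-protect control register.
# Each entry: (description, bit mask, expected masked value).
CHECKS = [
    ("BIOSWE (BIOS write-enable) should be 0",  0x01, 0x00),
    ("BLE (BIOS lock-enable) should be 1",      0x02, 0x02),
    ("SMM_BWP (SMM write-protect) should be 1", 0x20, 0x20),
]

def audit(reg_value: int) -> list:
    """Return a description of every setting that deviates from expectations."""
    return [desc for desc, mask, expected in CHECKS
            if (reg_value & mask) != expected]
```

For example, `audit(0x22)` returns an empty list (write-enable clear, lock and SMM protection set), while `audit(0x0B)` flags both the write-enable and SMM write-protect bits, the kind of finding a Chipsec module would report.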
Your organization runs on numerous hardware devices, each with its own collection of firmware. As attackers continue to set their sights further down the stack in 2020 and beyond, firmware security will be an important focus for every organization. Ensure your organization properly prioritizes defenses for this growing threat vector: install firmware updates regularly, continuously detect potential platform misconfigurations, and enable available security features and their respective policies to harden firmware resilience in terms of confidentiality, integrity and availability.
Computer scientists at KU Leuven have once again exposed a security flaw in Intel processors. Jo Van Bulck, Frank Piessens, and their colleagues in Austria, the United States, and Australia gave the manufacturer one year’s time to fix the problem.
Load Value Injection
Plundervolt, Zombieload, Foreshadow: in the past couple of years, Intel has had to issue quite a few patches for vulnerabilities that computer scientists at KU Leuven have helped to expose. “All measures that Intel has taken so far to boost the security of its processors have been necessary, but they were not enough to ward off our new attack,” says Jo Van Bulck from the Department of Computer Science at KU Leuven.
Like the previous attacks, the new technique – dubbed Load Value Injection – targets the ‘vault’ of computer systems with Intel processors: SGX enclaves.
“To a certain extent, this attack picks up where our Foreshadow attack of 2018 left off. A particularly dangerous version of this attack exploited the vulnerability of SGX enclaves, so that the victim’s passwords, medical information, or other sensitive information was leaked to the attacker.
“Load Value Injection uses that same vulnerability, but in the opposite direction: the attacker’s data are smuggled – ‘injected’ – into a software program that the victim is running on their computer. Once that is done, the attacker can take over the entire program and acquire sensitive information, such as the victim’s fingerprints or passwords.”
Giving Intel enough time to fix the problem
The vulnerability was already discovered on 4 April 2019. Nevertheless, the researchers and Intel agreed to keep it a secret for almost a year. Responsible disclosure embargoes are not unusual when it comes to cybersecurity, although they usually lift after a shorter period of time.
“We wanted to give Intel enough time to fix the problem. In certain scenarios, the vulnerability we exposed is very dangerous and extremely difficult to deal with because, this time, the problem did not just pertain to the hardware: the solution also had to take software into account. Therefore, hardware updates like the ones issued to resolve the previous flaws were no longer enough. This is why we agreed upon an exceptionally long embargo period with the manufacturer.”
“Intel ended up taking extensive measures that force the developers of SGX enclave software to update their applications. However, Intel has notified them in time. End-users of the software have nothing to worry about: they only need to install the recommended updates.”
“Our findings show, however, that the measures taken by Intel make SGX enclave software 2 to 19 times slower.”
What are SGX enclaves?
Computer systems are made up of different layers, making them very complex. Every layer also contains millions of lines of computer code. As this code is still written manually, the risk of errors is significant.
If such an error occurs, the entire computer system is left vulnerable to attacks. You can compare it to a skyscraper: if one of the floors becomes damaged, the entire building might collapse.
Viruses exploit such errors to gain access to sensitive or personal information on the computer, from holiday pictures and passwords to business secrets.
In order to protect their processors against this kind of intrusion, IT company Intel introduced an innovative technology in 2015: Intel Software Guard eXtensions (Intel SGX). This technology creates isolated environments in the computer’s memory, so-called enclaves, where data and programs can be used securely.
“If you look at a computer system as a skyscraper, the enclaves form a vault”, researcher Jo Van Bulck explains. “Even when the building collapses the vault should still guard its secrets – including passwords or medical data.”
The technology seemed watertight until August 2018, when researchers at KU Leuven discovered a breach. Their attack was dubbed Foreshadow. In 2019, the Plundervolt attack revealed another vulnerability. Intel has released updates to resolve both flaws.
Radisys delivers its Engage AI-based media apps on OpenNESS to accelerate 4G and 5G networks innovation
Radisys, a global leader of open telecom solutions, announced the deployment of the Radisys Engage portfolio of digital engagement and AI-based real-time media applications on Open Network Edge Services Software (OpenNESS), an open source multi-access edge compute (MEC) platform initiative led by Intel to accelerate innovation and unique experiences on 4G/LTE and 5G networks.
The advent of 5G and of massive IoT applications requires ultra-low latency, high bandwidth, and real-time access to radio network resources, leading to the rise of multi-access edge computing, which enables virtualized applications to be deployed on compute resources closest to the edge.
However, the lack of broad industry standardization for MEC has led to fragmentation in the development and deployment of MEC platforms, thereby hindering wide-scale adoption.
The OpenNESS platform abstracts complex networking technology and provides microservices/APIs resulting in an easy-to-use toolkit to develop and deploy applications at the network edge.
Radisys’ Engage advanced real-time media applications are available on the OpenNESS platform, enabling new digital experiences.
- Radisys’ programmable computer vision and analytics applications require ultra-low latency and high bandwidth for software processing of live video streams to “see” in real-time, enabling enhanced security, IoT, remote monitoring, immersive communication applications and more.
- Radisys’ AR, VR, 360 video and speech recognition capabilities with real-time media analytics applications on a distributed edge compute infrastructure are enabling a new generation of 4G and 5G monetizable services.
- Radisys’ in-network biometric authentication enhances secure access to applications and remote locations.
“We are pleased to deliver a complete and open MEC platform that comes with ready-to-deploy edge media applications,” said Adnan Saleem, CTO, Software and Cloud Solutions, Radisys.
“Through collaboration with Intel, our solution will help service providers to realize the ultra-low latency benefits of 5G, while enabling rich new applications like augmented reality, localized collaboration, improved security, and more.”
“Intel is collaborating with the Network Builders ecosystem to deliver open solutions that enable service providers to accelerate innovation while controlling complexity and costs,” said Renu Navale, Vice President & General Manager, Edge Computing and Ecosystem Enabling at Intel.
“By adopting and integrating OpenNESS – Intel’s open source software for network edge, Radisys’ Engage media-centric applications provide the industry with a unique platform for real-time media applications and services.”
We’ve all shared the frustration when it comes to errors – software updates that are intended to make our applications run faster inadvertently end up doing just the opposite. These bugs, dubbed in the computer science field as performance regressions, are time-consuming to fix since locating software errors normally requires substantial human intervention.
Schematic illustrating how Muzahid’s deep learning algorithm works. The algorithm is ready for anomaly detection after it is first trained on performance counter data from a bug-free version of a program.
To overcome this obstacle, researchers at Texas A&M University, in collaboration with computer scientists at Intel Labs, have now developed a complete automated way of identifying the source of errors caused by software updates.
The deep learning algorithm
Their algorithm, based on a specialized form of machine learning called deep learning, is not only turnkey, but also quick, finding performance bugs in a matter of a few hours instead of days.
“Updating software can sometimes turn on you when errors creep in and cause slowdowns. This problem is even more exaggerated for companies that use large-scale software systems that are continuously evolving,” said Dr. Abdullah Muzahid, assistant professor in the Department of Computer Science and Engineering.
“We have designed a convenient tool for diagnosing performance regressions that is compatible with a whole range of software and programming languages, expanding its usefulness tremendously.”
How does it work?
To pinpoint the source of errors within a piece of software, debuggers often check the status of performance counters within the central processing unit. These counters are hardware registers that monitor how the program is being executed on the computer’s hardware, for example how it uses memory.
So, when the software runs, counters keep track of the number of times it accesses certain memory locations, the time it stays there and when it exits, among other things. Hence, when the software’s behavior goes awry, counters are again used for diagnostics.
“Performance counters give an idea of the execution health of the program,” said Muzahid. “So, if some program is not running as it is supposed to, these counters will usually have the telltale sign of anomalous behavior.”
However, newer desktops and servers have hundreds of performance counters, making it virtually impossible to keep track of all of their statuses manually and then look for aberrant patterns that are indicative of a performance error. That is where Muzahid’s machine learning comes in.
By using deep learning, the researchers were able to monitor data coming from a large number of the counters simultaneously by reducing the size of the data, which is similar to compressing a high-resolution image to a fraction of its original size by changing its format. In the lower dimensional data, their algorithm could then look for patterns that deviate from normal.
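The idea of compressing counter data and then flagging deviations can be sketched with a simple reconstruction-error detector. This is not the researchers’ actual model; it substitutes a linear projection (PCA via SVD) for their deep network, and all the data below is synthetic, but the principle is the same: learn a low-dimensional representation of normal counter vectors, then flag samples that reconstruct poorly.

```python
import numpy as np

# Sketch of anomaly detection by dimensionality reduction. PCA stands in
# for the paper's deep model; the counter data here is synthetic.

rng = np.random.default_rng(0)

# "Normal" runs: 200 samples of 50 correlated performance counters.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
normal = latent @ mixing + 0.05 * rng.normal(size=(200, 50))

# Learn a 3-dimensional subspace from the normal data.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:3]                      # top principal directions

def reconstruction_error(x):
    centered = x - mean
    compressed = centered @ components.T  # reduce dimensionality
    restored = compressed @ components    # project back
    return float(np.linalg.norm(centered - restored))

# Threshold: worst error seen on normal data, with some headroom.
threshold = 1.5 * max(reconstruction_error(x) for x in normal)

# A buggy run breaks the learned correlations between counters.
anomaly = rng.normal(size=50) * 5
print(reconstruction_error(anomaly) > threshold)  # prints True
```

Training only needs a known-good version of the program, which is exactly how the researchers set up their experiment: learn what “healthy” counter patterns look like, then scan the updated version for deviations.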
The versatility of the algorithm
When their algorithm was ready, the researchers tested if it could find and diagnose a performance bug in a commercially available data management software used by companies to keep track of their numbers and figures. First, they trained their algorithm to recognize normal counter data by running an older, glitch-free version of the data management software.
Next, they ran their algorithm on an updated version of the software with the performance regression. They found that their algorithm located and diagnosed the bug within a few hours. Muzahid said this type of analysis could take a considerable amount of time if done manually.
In addition to diagnosing performance regressions in software, Muzahid noted that their deep learning algorithm has potential uses in other areas of research as well, such as developing the technology needed for autonomous driving.
“The basic idea is once again the same, that is being able to detect an anomalous pattern,” said Muzahid. “Self-driving cars must be able to detect whether a car or a human is in front of it and then act accordingly. So, it’s again a form of anomaly detection and the good news is that is what our algorithm is already designed to do.”
As modern computer systems become more complex and interconnected, we are seeing more vulnerabilities than ever before. As attacks become more pervasive and sophisticated, they are often progressing past the software layer and compromising hardware. As a response, the industry has been working to deliver microarchitectural improvements and today, implementing hardware-based security is widely recognized as a best practice.
However, hardware-based security has its own set of challenges when not designed, implemented or verified properly. Combined with increasingly sophisticated methods that exploit hardware by chaining it together with software vulnerabilities, it’s evident that the industry needs a better and more in-depth understanding of a common taxonomy of hardware security vulnerabilities: how these vulnerabilities get introduced into products, how they can be exploited, their associated risks, and best practices to prevent and identify them early in the product development lifecycle.
Today, a key resource for tracking software vulnerabilities exists in MITRE’s Common Weakness Enumeration (CWE) system, which is also complemented by the Common Vulnerability and Exposures (CVE) system.
A simple way to differentiate the two is that CWE provides a taxonomy of common security vulnerability types, with different views for traversing its categorical buckets, whereas CVE maintains the list of specific vulnerability instances that have already been found and reported publicly. Multiple CVEs usually map to a single CWE.
Essentially, the two systems work hand-in-hand to provide the ultimate vulnerability reference guide. These resources aim to educate architects and developers to identify potential mistakes when designing and developing software products. At the same time, they enable security researchers and tool vendors to pinpoint current gaps, so they can offer better tools and methodologies to automate the detection of common software security issues.
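The many-to-one relationship between CVE instances and CWE categories can be shown with a toy lookup. The CVE identifiers below are made up for illustration; the two CWE categories are real ones (CWE-787, out-of-bounds write, and CWE-416, use after free):

```python
# Toy CVE -> CWE mapping (the CVE IDs are hypothetical placeholders).
# Many specific vulnerability instances (CVEs) map to one weakness
# category (CWE), so grouping by CWE exposes recurring mistake classes.

from collections import defaultdict

CVE_TO_CWE = {
    "CVE-0000-0001": "CWE-787",  # out-of-bounds write
    "CVE-0000-0002": "CWE-787",
    "CVE-0000-0003": "CWE-416",  # use after free
}

def group_by_weakness(cve_to_cwe):
    groups = defaultdict(list)
    for cve, cwe in sorted(cve_to_cwe.items()):
        groups[cwe].append(cve)
    return dict(groups)

print(group_by_weakness(CVE_TO_CWE))
# prints {'CWE-787': ['CVE-0000-0001', 'CVE-0000-0002'],
#         'CWE-416': ['CVE-0000-0003']}
```

A hardware-aware CWE would let the same grouping work for hardware flaws, turning isolated bug reports into recognizable patterns that architects and tool vendors can act on.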
With the growing awareness of hardware vulnerabilities, the CWE could be enhanced to include relevant entry points, common consequences, examples, countermeasures and detection methods from the specific hardware perspective. Furthermore, there are hardware-centric weaknesses that are related to the physical properties of hardware devices (e.g., temperature, voltage glitches, current, wear out, interference, and more) which the CWE does not yet categorize.
Due to these missing reference materials for hardware vulnerabilities in the CWE, researchers do not have the same standard taxonomy that would enable them to share information and techniques with one another. If we expect hardware vendors and their partners to collectively deliver more secure solutions, we must have a common language for discussing hardware security vulnerabilities.
Over the past few years, Intel researchers have been active in raising public awareness on common hardware security vulnerabilities (through academia, at conferences, and even with the industry’s first hardware capture-the-flag competition). But more can always be done. Here are six ways the industry would benefit from a standardized Hardware CWE:
1. Product architects and designers could gain a deeper understanding of the common hardware security pitfalls, allowing them to potentially avoid repeat mistakes when creating solutions.
2. Verification engineers could become more fluent in commonly made security mistakes and how they can be effectively detected at various stages of the product development lifecycle. This would enable them to devise proper verification plans and test strategies for improving the security robustness of products.
3. Security architects and researchers could better focus their energy on systemic issues and work to identify effective mitigations that help eliminate risks and/or make exploitation much more difficult for attackers.
4. Electronic Design Automation (EDA) vendors could prioritize and expand their verification tool features and offerings. This could improve the effectiveness of their tools in guiding users to avoid the introduction of common vulnerabilities. It could also provide a common platform for EDA tool users to compare and benchmark the capabilities of different tool options, enabling them to identify the right ones that meet their specific needs.
5. Educators could develop training materials and best practices that focus on the most relevant areas of concern, so university curriculum and corporate trainings could help audiences gain the necessary skills they need.
6. Security researchers could leverage a common taxonomy to communicate without ambiguities, facilitating learning exchange, systematic study and collaboration. And a public database would also make the research field more accessible for aspiring researchers.
As our industry moves forward to combat the latest threats, it is vital that we invest in research, tooling and the proper resources to catalog and evaluate both software and hardware vulnerabilities.
Today, categorizing hardware vulnerabilities, root causes, and mitigation strategies often feels like an uphill battle. As hardware vulnerabilities continue to get more complex and challenging for the industry, creating a common taxonomy for discussing, documenting and sharing hardware-based threats becomes paramount.
Let’s work together as an industry to ensure that we are speaking the same language when it comes to researching and mitigating the hardware vulnerabilities of the future.
- Arun Kanuparthi, Offensive Security Researcher, Intel
- Hareesh Khattri, Offensive Security Researcher, Intel
Ping An Insurance announced that Ping An Technology and Intel signed a strategic collaboration agreement in Shenzhen, China. The two companies plan to establish a joint laboratory, cooperate on products and technology, and form a joint project team in areas of high-performance computing, including storage, network, cloud, artificial intelligence (AI) and security.
The signing ceremony included Ping An Technology leaders Ericson Chan, CEO; William Fang, Chief Technology Officer; and Huang Wei, Strategic Partner and General Manager of Ecosystems; and Intel leaders Rose Schooler, Corporate Vice President, Sales and Marketing Group, and General Manager, Data Center Group; Wang Rui, Vice President, Sales and Marketing Group, and PRC Country Manager; and Liang Yali, General Manager, Intel Biz Consumption Group, PRC.
Mr. Chan said, “Partnering with Intel will give Ping An an edge to boost our cloud technologies and to supercharge our AI-based services and solutions. We will further strengthen our data protection with Intel hardware-enabled security in finance and healthcare, two areas where it is so critical.”
Ms. Schooler said, “The two parties will explore joint development in technology areas including AI, high performance computing, visual computing and FPGAs using the full range of Intel’s data-centric portfolio. We plan to innovate on and support an open ecosystem around Ping An Technology’s Ping An Cloud.”
Backed by the strong financial background of Ping An and the world-class computer technology of Intel, Ping An Technology develops and applies technologies in a wide range of scenarios to support five ecosystems: financial services, health care, auto services, real estate services, and smart city services.
Ping An Cloud has yielded numerous innovative results with Intel on the development of financial private and public cloud. Both parties will continue to generate more competitive products and services together, to further “minimize costs and maximize efficiency” in technology innovation.