A new threat matrix outlines attacks against machine learning systems

A report published last year noted that most attacks against artificial intelligence (AI) systems focus on manipulating them (e.g., influencing recommendation systems to favor specific content), but that new attacks leveraging machine learning (ML) are within attackers’ capabilities.

Microsoft now says that attacks on machine learning (ML) systems are on the uptick and MITRE notes that, in the last three years, “major companies such as Google, Amazon, Microsoft, and Tesla, have had their ML systems tricked, evaded, or misled.” At the same time, most businesses don’t have the right tools in place to secure their ML systems and are looking for guidance.

Experts at Microsoft, MITRE, IBM, NVIDIA, the University of Toronto, the Berryville Institute of Machine Learning and several other companies and educational organizations have therefore decided to create the first version of the Adversarial ML Threat Matrix, to help security analysts detect and respond to this new type of threat.

What is machine learning (ML)?

Machine learning is a subset of artificial intelligence (AI). It is based on computer algorithms that ingest “training” data, “learn” from it, and ultimately deliver predictions, make decisions, or classify inputs accurately.

Machine learning algorithms are used for tasks like identifying spam, detecting new threats, predicting user preferences, performing medical diagnoses, and so on.
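To make the train-then-predict loop above concrete, here is a minimal sketch of a word-frequency spam filter in plain Python. The training messages and word lists are invented for this example and are not from any real system:

```python
import math
from collections import Counter

# Toy training data (invented for illustration): (message, label) pairs.
TRAINING = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the team", "ham"),
]

def train(examples):
    """The 'learning' step: count word frequencies per class."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """The 'prediction' step: naive Bayes scoring with Laplace smoothing."""
    vocab = len(set(counts["spam"]) | set(counts["ham"]))
    scores = {}
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))  # class prior
        n = sum(counts[label].values())
        for word in text.split():
            # +1 smoothing so unseen words don't zero out the probability
            score += math.log((counts[label][word] + 1) / (n + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

counts, totals = train(TRAINING)
print(classify("claim your free money", counts, totals))  # spam
print(classify("monday meeting agenda", counts, totals))  # ham
```

Real spam filters train on millions of messages, but the shape is the same: statistics extracted from training data drive predictions on inputs the system has never seen.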

Security should be built in

Mikel Rodriguez, a machine learning researcher at MITRE who also oversees MITRE’s Decision Science research programs, says that we’re now at the same stage with AI as we were with the internet in the late 1980s, when people were just trying to make the internet work and weren’t thinking about building in security.

We can learn from that mistake, though, and that’s one of the reasons the Adversarial ML Threat Matrix has been created.

“With this threat matrix, security analysts will be able to work with threat models that are grounded in real-world incidents that emulate adversary behavior with machine learning,” he noted.

The matrix will also help them think holistically, and will spur better communication and collaboration across organizations by providing a common language and taxonomy for the different vulnerabilities, he says.

The Adversarial ML Threat Matrix

“Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms. Data can be weaponized in new ways which requires an extension of how we model cyber adversary behavior, to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle,” MITRE noted.
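As a deliberately toy illustration of such an inherent limitation, the sketch below applies a fast-gradient-sign-style perturbation to a linear classifier. All weights and inputs here are invented for the example; real attacks target deep networks, but the principle is the same:

```python
# Toy linear classifier: weights and input invented for illustration.
# score(x) > 0 -> class 1, otherwise class 0.
w = [1.0, -2.0, 1.5, -0.5]
b = 0.1

def sign(v):
    return (v > 0) - (v < 0)

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return int(score(x) > 0)

x = [0.9, 0.1, 0.8, 0.2]   # a benign input, classified as class 1

# FGSM-style attack: for a linear model, the gradient of the score with
# respect to the input is just w, so nudging every feature by a small
# epsilon against sign(w) lowers the score as fast as possible.
epsilon = 0.4
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))  # 1 0 -- a small perturbation flips the label
```

The perturbation is bounded per feature, so the adversarial input can remain close to the original while producing a different classification, which is exactly the kind of evasion the matrix catalogs.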

The matrix has been modeled on the MITRE ATT&CK framework.

The group has demonstrated how previous attacks – whether by researchers, red teams or online mobs – can be mapped to the matrix.

They also stressed that the matrix will be routinely updated as feedback from the security and adversarial machine learning communities is received. They encourage contributors to point out new techniques, propose best (defense) practices, and share examples of successful attacks on ML systems.

“We are especially excited for new case-studies! We look forward to contributions from both industry and academic researchers,” MITRE concluded.

Inspur launches Cloud SmartNIC solution based on NVIDIA BlueField-2 DPU at GTC 2020

Inspur unveiled its Cloud SmartNIC solution based on NVIDIA BlueField-2 data processing unit (DPU) at GTC 2020.

The Inspur Cloud SmartNIC solution deeply integrates Inspur servers with the NVIDIA DPU, combining embedded processing, SmartNIC networking and a high-performance PCIe 4.0 host interface to deliver network acceleration at speeds of up to 200Gb/s.

The Cloud SmartNIC is used for offloading functions like traffic management, storage virtualization, and security isolation, significantly freeing CPU computing resources, and delivering efficient software-defined, hardware-accelerated services for AI, big data analytics, cloud, virtualization, microsegmentation, and next-generation firewalls.

The DPU, a key part of network computing, is a new accelerated computing element. It is an integrated system on a chip that combines a programmable multi-core Arm CPU, state-of-the-art SmartNIC networking, high-performance PCIe interface, and a powerful set of networking, storage, and security features.

It offloads functions like software-defined networking (SDN), software-defined storage (SDS), and encryption and security processing from the host CPU. In the traditional model, legacy appliances or the CPU were required to run data center services.

In the hardware-accelerated, NVIDIA BlueField DPU-enabled server, these services are offloaded to the DPU, freeing the CPU to run applications, and accelerating data center services that are safe, reliable, convenient, and powerful.

Accelerating the software-defined data center

Today’s data centers must run a combination of modern, accelerated applications – such as AI and high-performance data analytics – alongside existing legacy applications.

Traditional data center networking, storage, and security technologies can effectively deal with north-south traffic coming in and out of the data center. But they are inadequate to address distributed, cloud-native, accelerated workloads based on dynamic microservices.

These services move around the data center as workloads scale out, and most of the traffic is east-west, or between nodes within the data center. Moreover, fixed-function security appliances lack the flexibility to scale and support cloud-native applications, and thus expose a large attack surface through unprotected east-west communications and virtual machine (VM)-to-VM traffic.

The software-defined data center implements networking, storage, and security functions as software running on powerful servers, and is more flexible and scalable than architectures based on fixed appliances. It also achieves application compatibility and resource scalability by pushing data center functions into software running on VMs or containers.

However, this flexibility and scalability come at the cost of additional CPU load from software-defined services and resource virtualization. The challenge is to conserve precious CPU resources while efficiently integrating and accelerating cloud, data access, and AI capabilities with the scalability of software-defined data centers and cloud-native applications.

Through a dedicated intelligent hardware-accelerated data center services chip, the Cloud SmartNIC solution enables advanced networking functions of the software-defined data center, such as virtual switching and routing, load balancing, and virtual machine and container networking services.

For storage, the NVMe controller is accelerated by the DPU to allow high-performance flash to be spread across all the nodes in the data center and offer elastic block storage capabilities to applications.

As the foundation of a secure platform, the DPU offers a hardware root of trust, secure firmware authentication and updates, and encryption accelerators. Additional advanced security accelerators offload connection tracking, deep packet inspection, and regular expression matching to accelerate next-generation firewalls and intrusion detection and prevention systems.

Liu Jun, GM of AI and HPC at Inspur, noted that, “Inspur is innovating four key processes in the data center: producing, scheduling, aggregating, and releasing AI computing power.

“The Cloud SmartNIC solution can efficiently empower computing power aggregation, deliver maximum computing power, and effectively tackle major challenges in big data analytics, data processing, and hyperscale AI model training.”

“Solutions that build on NVIDIA BlueField-2 DPUs deliver more efficient accelerated networking functions for users of enterprise applications. Optimized to offload critical networking, storage and security tasks from CPUs, BlueField-2 DPUs enable organizations to transform their IT infrastructure into state-of-the-art data centers that are accelerated, fully programmable and armed with ‘zero-trust’ security features to prevent data breaches and cyberattacks,” said Erik Pounds, head of product marketing, enterprise computing, at NVIDIA.

Use an NVIDIA GPU? Check whether you need security updates

NVIDIA has released security updates for the NVIDIA GPU Display Driver and the NVIDIA Virtual GPU Manager that fix a variety of serious vulnerabilities.

The driver security update should be implemented by users of the company’s desktop, workstation and data center GPUs, while the vGPU software update is available for the Virtual GPU Manager component on Citrix Hypervisor, VMware vSphere, Red Hat Enterprise Linux KVM, and Nutanix AHV enterprise virtualization solutions.

NVIDIA GPU Display Driver security updates

Four security holes have been plugged in the Display Driver:

  • CVE‑2020‑5979 affects the Control Panel component and may lead to privilege escalation
  • CVE‑2020‑5980 affects multiple components and may lead to code execution or DoS
  • CVE‑2020‑5981 affects the DirectX11 user mode driver and can, according to NVIDIA, lead to DoS
  • CVE‑2020‑5982 affects the kernel mode layer and can lead to DoS

CVE‑2020‑5980 was unearthed by Andy Gill of Pen Test Partners, and the discovery was detailed in a blog post published on Thursday.

The vulnerability allows for DLL hijacking, i.e., hijacking an application’s execution flow via external DLLs.

“If a vulnerable application is configured to run at a higher privilege level, then the malicious DLL that is loaded will also be executed at a higher level, thus achieving escalation of privilege. Often the application will behave no differently because malicious DLLs may also be configured to load the legitimate DLLs they were meant to replace or where a DLL doesn’t exist,” Gill explained.
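The first-match lookup that makes this possible can be sketched in a few lines. This is a hypothetical Python simulation of a loader’s search rule, not real Windows behavior, and all directory and DLL names are invented:

```python
# Hypothetical simulation of DLL search order; directory contents and
# names are invented for illustration.
SEARCH_ORDER = ["C:/Apps/Vulnerable", "C:/Windows/System32"]

def resolve(dll, filesystem):
    """Return the first directory on the search path that contains `dll`,
    mimicking the first-match rule a loader applies to a bare DLL name."""
    for directory in SEARCH_ORDER:
        if dll in filesystem.get(directory, set()):
            return f"{directory}/{dll}"
    return None

# Normally the DLL exists only in the system directory and resolves there.
clean = {"C:/Windows/System32": {"helper.dll"}}
print(resolve("helper.dll", clean))      # C:/Windows/System32/helper.dll

# If an attacker can write to a directory searched earlier, their copy is
# found first and runs with the application's privileges.
hijacked = {
    "C:/Apps/Vulnerable": {"helper.dll"},    # attacker-planted DLL
    "C:/Windows/System32": {"helper.dll"},
}
print(resolve("helper.dll", hijacked))   # C:/Apps/Vulnerable/helper.dll
```

This is why loading libraries by absolute path, rather than by bare name, is a standard mitigation against this class of attack.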

CVE‑2020‑5981 was discovered by Piotr Bania of Cisco Talos. The CVE number covers multiple vulnerabilities and, Cisco claims, they could be exploited to achieve remote code execution (and not just DoS).

“An adversary could exploit these vulnerabilities by supplying the user with a malformed shader, eventually allowing them to execute code on the victim machine. These bugs could also allow the attacker to perform a guest-to-host escape through Hyper-V RemoteFX on Windows machines,” they say.

Users are advised to check which NVIDIA display driver version is currently installed on their system(s) and update it if necessary; updates are available from NVIDIA’s driver download page.

NVIDIA vGPU Software security updates

Vulnerabilities CVE‑2020‑5983 through CVE‑2020‑5989 affect the vGPU plugin and could lead to DoS, information disclosure, code execution, tampering, and privilege escalation.

Users are advised to upgrade to vGPU Software versions 11.1, 10.4, or 8.5 – updates are available through the NVIDIA Licensing Portal.

Keysight and NVIDIA boost development of flexible virtualized networks and high-value mobile services

Keysight Technologies, a leading technology company that helps enterprises, service providers and governments accelerate innovation to connect and secure the world, announced it is accelerating the development of flexible virtualized networks and high-value mobile services with NVIDIA.

Mobile operators are in the process of transforming their networks, using a dynamic virtualized radio access network (vRAN) architecture and open RAN (O-RAN) standard interfaces, to cost-effectively and flexibly deliver a broad range of services that rely on low latencies and high throughput.

A software-defined, elastic network architecture allows mobile operators to meet the requirements of a new digital era where consumer and industry users will benefit from high-value services in augmented reality/virtual reality (AR/VR), autonomous driving, smart factories and gaming.

Keysight offers solutions that enable mobile operators and network equipment manufacturers (NEMs) to validate 5G and legacy radio access networks as well as core networks that are critical to ensuring the end-user experience of applications using vRAN architecture.

Keysight’s suite of UE emulation (UEE) solutions addresses specifications set by both the 3GPP and O-RAN standards organizations, enabling NVIDIA to test the NVIDIA Aerial software development kit (SDK) via the enhanced Common Public Radio Interface (eCPRI).

The Aerial SDK enables telcos to build and deploy the most programmable and scalable software-defined 5G virtual radio access networks (RANs) to deliver new AI and IoT services at the edge.

“Our collaboration with NVIDIA, using Keysight’s automated and scalable 5G UE emulation solutions, helps the mobile industry create and deliver high-value 5G use cases,” said Giampaolo Tardioli, vice president and general manager of Keysight’s network access group.

“Keysight’s 5G solutions enable an ecosystem of network infrastructure providers and mobile operators to thoroughly validate the performance of virtualized 5G radio access and core network functionalities across different radio and optical interfaces.”

Keysight’s UEE solutions enable users to accelerate verification of a RAN, both over the air and via O-RAN interfaces. Integrated sophisticated channel emulation capabilities allow users to verify the performance of a RAN deployed in a complex radio environment.

Users can access a comprehensive range of real-world scenarios in any 3GPP-specified frequency band for both protocol and load testing.

“With the Aerial SDK, we’re helping the telco industry meet the growing computing demands of 5G signal processing by taking advantage of the massive compute capabilities of our programmable GPUs,” said Chris Lamb, vice president of compute software at NVIDIA.

“By working with Keysight to validate Aerial’s O-RAN compatible radio interface, we have access to their cutting-edge tools, developer support and 5G expertise.”