AWS Network Firewall: Network protection across all AWS workloads

Amazon Web Services announced the general availability of AWS Network Firewall, a new managed security service that makes it easier for customers to enable network protections across all of their AWS workloads. Customers can enable AWS Network Firewall in their desired Amazon Virtual Private Cloud (VPC) environments with just a few clicks in the AWS Console, and the service automatically scales with network traffic to provide high availability protections without the need to set up …

New infosec products of the week: October 30, 2020

Confluera 2.0: Enhanced autonomous detection and response capabilities to protect cloud infrastructure

Confluera XDR delivers a purpose-built cloud workload detection and response solution with the unique ability to deterministically track threats progressing through the environment. Confluera holistically integrates security signals from the environment to provide a complete narrative of a cyberattack in real time, as opposed to showing isolated alerts.

Aqua Security unveils Kubernetes-native security capabilities

Aqua Security’s new Kubernetes security solution addresses the complexity and short supply of engineering expertise required to configure Kubernetes infrastructure effectively and automatically, by introducing KSPM – Kubernetes Security Posture Management – a coherent set of policies and controls to automate secure configuration and compliance.

AWS Nitro Enclaves: Create isolated environments to protect highly sensitive workloads

AWS Nitro Enclaves helps customers reduce the attack surface for their applications by providing a trusted, highly isolated, and hardened environment for data processing. Each Enclave is a virtual machine created using the same Nitro Hypervisor technology that provides CPU and memory isolation for Amazon EC2 instances, but with no persistent storage, no administrator or operator access, and no external networking.

GrammaTech CodeSentry: Identifying security blind spots in third party code

GrammaTech announced CodeSentry, which performs binary software composition analysis to inventory third party code used in custom developed applications and detect vulnerabilities they may contain. CodeSentry identifies blind spots and allows security professionals to measure and manage risk quickly and easily throughout the software lifecycle.

Protegrity Data Protection Platform enhancements help secure sensitive data across cloud environments

Built for hybrid-cloud and multi-cloud serverless computing, Protegrity’s latest platform enhancements allow companies to deploy and update customized policies across geographies, departments, and digital transformation programs. Protegrity enables businesses to turn sensitive data into intelligence-driven insights to monetize data responsibly, and support vital AI and ML initiatives.

AWS adds new S3 security and access control features

Amazon Web Services (AWS) has made available three new S3 (Simple Storage Service) security and access control features:

  • Object Ownership
  • Bucket Owner Condition
  • Copy API via Access Points

Object Ownership

Object Ownership is a bucket-level setting that enforces the transfer of ownership of newly uploaded objects to the bucket owner.

“With the proper permissions in place, S3 already allows multiple AWS accounts to upload objects to the same bucket, with each account retaining ownership and control over the objects. This many-to-one upload model can be handy when using a bucket as a data lake or another type of data repository. Internal teams or external partners can all contribute to the creation of large-scale centralized resources,” explained Jeff Barr, Chief Evangelist for AWS.

But with this setup, the bucket owner doesn’t have full control over the objects in the bucket and therefore cannot use bucket policies to share and manage them. If the object uploader needs to retain access, bucket owners will need to grant additional permissions to the uploading account.

“Keep in mind that this feature does not change the ownership of existing objects. Also, note that you will now own more S3 objects than before, which may cause changes to the numbers you see in your reports and other metrics,” Barr added.
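Where teams want to apply the setting programmatically rather than through the console, a minimal boto3 sketch looks like the following; the bucket name is a placeholder and the caller needs the relevant S3 permissions.

```python
import boto3

s3 = boto3.client("s3")

# Prefer bucket-owner ownership for objects uploaded by other accounts.
# "example-data-lake" is a placeholder bucket name.
s3.put_bucket_ownership_controls(
    Bucket="example-data-lake",
    OwnershipControls={
        "Rules": [{"ObjectOwnership": "BucketOwnerPreferred"}]
    },
)
```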

Bucket Owner Condition

Bucket Owner Condition allows users to verify that a bucket is owned by the expected AWS account when they create a new object or perform other S3 operations.

AWS recommends using Bucket Owner Condition whenever users perform a supported S3 operation and know the account ID of the expected bucket owner.

The feature eliminates the risk of users accidentally interacting with buckets in the wrong AWS account. For example, it prevents situations like applications writing production data into a bucket in a test account.
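A minimal sketch of what this looks like with boto3, using a placeholder bucket name and account ID: if the bucket is owned by a different account, S3 rejects the request.

```python
import boto3

s3 = boto3.client("s3")

# The upload fails with an access error unless "example-prod-bucket"
# is owned by account 111122223333 (both values are placeholders).
s3.put_object(
    Bucket="example-prod-bucket",
    Key="reports/2020-10-30.csv",
    Body=b"production data",
    ExpectedBucketOwner="111122223333",
)
```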

Copy API via Access Points

S3 Access Points are “unique hostnames that customers create to enforce distinct permissions and network controls for any request made through the access point. Customers with shared data sets […] can easily scale access for hundreds of applications by creating individualized access points with names and permissions customized for each application.”

The feature can now be used together with the S3 CopyObject API, allowing customers to copy data to and from access points within an AWS Region.
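As a rough illustration (the access point ARNs, account ID, and object keys are placeholders), CopyObject can address both the source and the destination through access point ARNs instead of bucket names:

```python
import boto3

s3 = boto3.client("s3")

# Copy an object between two S3 Access Points in the same Region.
s3.copy_object(
    Bucket="arn:aws:s3:us-east-1:111122223333:accesspoint/analytics-ap",
    Key="shared/dataset.parquet",
    CopySource={
        "Bucket": "arn:aws:s3:us-east-1:111122223333:accesspoint/ingest-ap",
        "Key": "incoming/dataset.parquet",
    },
)
```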

New infosec products of the week: October 2, 2020

Cohesity SiteContinuity: Protecting business-critical apps across a single platform

Cohesity SiteContinuity is an automated disaster recovery solution that is integrated with the company’s backup and continuous data protection capabilities — making it the only web-scale, converged solution to protect applications across tiers, service levels, and locations on a single platform.

Stealthbits SbPAM 3.0: A modernized and simplified approach to PAM

SbPAM 3.0 continues Stealthbits’ commitment to renovate and simplify PAM. The company approaches PAM from the perspective of the abundance of privileged activities that need to be performed, not a group of privileged admins needing accounts.

BullGuard 2021 security suite features multi-layered protection

The BullGuard 2021 security suite empowers consumers to confidently perform sensitive online transactions in absolute safety and rest assured knowing cyber threats are stopped dead in their tracks. BullGuard 2021 blocks malicious behavior before it can do damage, even when malware attempts to intentionally take a consumer’s device offline.

Siemens Energy MDR defends energy companies against cyberattacks

MDR’s technology platform, Eos.ii, leverages AI and machine learning methodologies to gather and model real-time energy asset intelligence. This allows Siemens Energy’s cybersecurity experts to monitor, detect and uncover attacks before they execute.

Fleek launches Space, an open source, private file storage and collaboration platform

Space’s mission is to enable a fully private, peer to peer (p2p) file and work collaboration experience for users. Space is built on Space Daemon, the platform’s open source framework and backend. Space Daemon enables other apps, similar to Space, to be built as privacy-focused, encrypted p2p apps.

AWS launches Amazon Timestream, a serverless time series database for IoT and operational applications

Amazon Timestream simplifies the complex process of data lifecycle management with automated storage tiering that stores recent data in memory and automatically moves historical data to a cost-optimized storage tier based on predefined user policies.

AWS launches Amazon Timestream, a serverless time series database for IoT and operational applications

Amazon Web Services (AWS) announced the general availability of Amazon Timestream, a new time series database for IoT and operational applications that can scale to process trillions of time series events per day up to 1,000 times faster than relational databases, and at as low as 1/10th the cost.

Amazon Timestream saves customers effort and expense by keeping recent data in-memory and moving historical data to a cost-optimized storage tier based upon user-defined policies. Its query processing gives customers the ability to access and combine recent and historical data transparently across tiers with a single query, without needing to specify explicitly whether the data resides in the in-memory or the cost-optimized tier.

Amazon Timestream’s analytics features provide time series-specific functionality to help customers identify trends and patterns in data in near real time. Because Amazon Timestream is serverless, it automatically scales up or down to adjust capacity based on load, without customers needing to manage the underlying infrastructure.

There are no upfront costs or commitments required to use Amazon Timestream, and customers pay only for the data they write, store, or query.

Today’s customers want to build IoT, edge, and operational applications that collect, synthesize, and derive insights from enormous amounts of data that change over time (known as time series data). For example, manufacturers might want to track IoT sensor data that measure changes in equipment across a facility, online marketers might want to analyze clickstream data that capture how a user navigates a website over time, and data center operators might want to view data that measure changes in infrastructure performance metrics.

This type of time series data can be generated from multiple sources in extremely high volumes, needs to be cost-effectively collected in near real time, and requires efficient storage that helps customers organize and analyze the data.
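To make the ingestion side concrete, the sketch below writes a single IoT sensor reading with boto3; the database, table, and dimension names are hypothetical.

```python
import time
import boto3

write_client = boto3.client("timestream-write", region_name="us-east-1")

# Write one temperature reading from a hypothetical device.
write_client.write_records(
    DatabaseName="iot_demo",          # hypothetical database
    TableName="sensor_readings",      # hypothetical table
    Records=[
        {
            "Dimensions": [{"Name": "device_id", "Value": "sensor-42"}],
            "MeasureName": "temperature",
            "MeasureValue": "21.7",
            "MeasureValueType": "DOUBLE",
            # Milliseconds since epoch (the default TimeUnit).
            "Time": str(int(time.time() * 1000)),
        }
    ],
)
```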

To do this today, customers can either use existing relational databases or self-managed time series databases. Neither of these options is attractive. Relational databases have rigid schemas that need to be predefined and are inflexible if new attributes of an application need to be tracked.

For example, when new devices come online and start emitting time series data, rigid schemas mean that customers either have to discard the new data or redesign their tables to support the new devices, which can be costly and time-consuming.

In addition to rigid schemas, relational databases also require multiple tables and indexes that need to be updated as new data arrives, which leads to complex and inefficient queries as the data grows over time.

Additionally, relational databases lack the required time series analytical functions like smoothing, approximation, and interpolation that help customers identify trends and patterns in near real time.

Alternatively, time series database solutions that customers build and manage themselves have limited data processing and storage capacity, making them difficult to scale. Many of the existing time series database solutions fail to support data retention policies, creating storage complexity as data grows over time.

To access the data, customers must build custom query engines and tools, which are difficult to configure and maintain, and can require complicated, multi-year engineering initiatives. Furthermore, these solutions do not integrate with the data collection, visualization, and machine learning tools customers are already using today. The result is that many customers just don’t bother saving or analyzing time series data, missing out on the valuable insights it can provide.

Amazon Timestream addresses these challenges by giving customers a purpose-built, serverless time series database for collecting, storing, and processing time series data. Amazon Timestream automatically detects the attributes of the data, so customers no longer need to predefine a schema.

Amazon Timestream simplifies the complex process of data lifecycle management with automated storage tiering that stores recent data in memory and automatically moves historical data to a cost-optimized storage tier based on predefined user policies.

Amazon Timestream also uses a purpose-built adaptive query engine to transparently access and combine recent and historical data across tiers with a single SQL statement, without having to specify which storage tier houses the data. This enables customers to query all of their data using a single query without requiring them to write complicated application logic that looks up where their data is stored, queries each tier independently, and then combines the results into a complete view.
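As an illustrative sketch (the database and table names are hypothetical), a single Timestream SQL query can aggregate the last day of readings without the application caring which tier currently holds the data:

```python
import boto3

query_client = boto3.client("timestream-query", region_name="us-east-1")

# One query spans both the in-memory and cost-optimized tiers transparently.
sql = """
SELECT device_id,
       bin(time, 1m) AS minute,
       avg(measure_value::double) AS avg_temperature
FROM "iot_demo"."sensor_readings"
WHERE measure_name = 'temperature'
  AND time > ago(24h)
GROUP BY device_id, bin(time, 1m)
ORDER BY minute
"""

result = query_client.query(QueryString=sql)
print(len(result["Rows"]), "rows returned")
```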

Amazon Timestream provides built-in time series analytics, with functions for smoothing, approximation, and interpolation, so customers don’t have to extract raw data from their databases and then perform their time series analytics with external tools and libraries or write complex stored procedures that not all databases support.

Amazon Timestream’s serverless architecture is built with fully decoupled data ingestion and query processing systems, giving customers virtually infinite scale and the ability to grow storage and query processing independently and automatically, without requiring customers to manage the underlying infrastructure.

In addition, Amazon Timestream integrates with popular data collection, visualization, and machine learning tools that customers use today, including services like AWS IoT Core (for IoT data collection), Amazon Kinesis and Amazon MSK (for streaming data), Amazon QuickSight (for serverless Business Intelligence), and Amazon SageMaker (for building, training, and deploying machine learning models quickly), as well as open source, third-party tools like Grafana (for observability dashboards) and Telegraf (for metrics collection).

“What we hear from customers is that they have a lot of insightful data buried in their industrial equipment, website clickstream logs, data center infrastructure, and many other places, but managing time series data at scale is too complex, expensive, and slow,” said Shawn Bice, VP, Databases, AWS.

“Solving this problem required us to build something entirely new. Amazon Timestream provides a serverless database service that is purpose-built to manage the scale and complexity of time series data in the cloud, so customers can store more data more easily and cost effectively, giving them the ability to derive additional insights and drive better business decisions from their IoT and operational monitoring applications.”

Amazon Timestream is available today in US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland), with availability in additional regions in the coming months.

The Guardian Life Insurance Company of America® (Guardian Life) is a Fortune 250 mutual company and a leading provider of life, disability, dental, and other benefits for individuals, at the workplace, and through government sponsored programs.

“Our team is building applications that collect and process metrics from our build systems and artifact repositories. We currently store this data in a self-hosted time series database,” said Eric Fiorillo, Head of Application Platform Strategy, Guardian Life.

“We started evaluating Amazon Timestream for storing and processing this data. We’re impressed with Amazon Timestream’s serverless, autoscaling, and data lifecycle management capabilities. We’re also thrilled to see that we can visualize our time series data stored in Amazon Timestream with Grafana.”

Autodesk is a global leader in software for architecture, engineering, construction, media and entertainment, and manufacturing industries. “At Autodesk, we make software for people who make things. This includes everything from buildings, bridges, roads, cars, medical devices, and consumer electronics, to the movies and video games that we all know and love,” said Scott Reese, SVP of Manufacturing, Cloud, and Production Products, Autodesk.

“We see that Amazon Timestream has the potential to help deliver new workflows by providing a cloud-hosted, scalable time series database. We anticipate that this will improve product performance and reduce waste in manufacturing. The key differentiator that excites us is the promise that this value will come without adding a data management burden for customers or for Autodesk.”

PubNub’s Realtime Communication Platform processes trillions of messages per month on behalf of thousands of customers and millions of end users.

“To effectively operate the PubNub platform it is essential to monitor the enormous number of high-cardinality metrics that this traffic generates. As our traffic volumes and the number of tracked metrics have grown over time the challenges of scaling our self-managed monitoring solution have grown as well, and it is prohibitively expensive for us to use a SaaS monitoring solution for this data. Amazon Timestream has helped address both of these needs perfectly,” said Dan Genzale, Director of Operations, PubNub.

“We’ve been working with AWS as a Timestream preview customer, providing feedback throughout the preview process. AWS has built an amazing product in Timestream, in part by incorporating PubNub’s feedback. We truly appreciate the fully-managed and autoscaling aspects that we have come to expect of AWS services, and we’re delighted that we can use our existing visualization tools with Amazon Timestream.”

Since 1998, Rackspace Technology has delivered enterprise-class hosting, professional services, and managed public cloud for businesses of all sizes and kinds around the world.

“At Rackspace, we believe Amazon Timestream fills a longstanding need for a fully managed service to capture time series data in a cloud native way. In our work with Amazon Timestream we’ve observed the platform to be performant and easy to use, with a developer experience that is familiar and consistent with other AWS services,” said Eric Miller, Senior Director of Technical Strategy, Rackspace Technology.

“Cloud Native and IoT are both core competencies for us, so we’re very pleased to see that Amazon Timestream is 100% serverless, and that it has tight integration with AWS IoT Core rule actions to easily ingest data without any custom code. Organizations who have a use case to capture and process time series data should consider using AWS Timestream as a scalable and reliable solution.”

Cake is a performance marketing software company that stores and analyzes billions of clickstream events. “Previously we used a DIY time series solution that was cumbersome to manage and was starting to tip over at scale,” said Tyler Agee, Principal Architect, Cake Software.

“When we heard AWS was building a time series database service—Amazon Timestream—we signed up for the preview and started testing our workloads. We’ve worked very closely with the AWS service team, giving them feedback and data on our use case to help ensure Amazon Timestream really excels in production for the size and scale of time series data we’re dealing with.

“The result is phenomenal—a highly scalable and fully serverless database. It’s the first time we’ve had a single solution for our time series data. We’re looking forward to continuing our close work with AWS and cannot wait to see what’s in store for Amazon Timestream.”

Trimble Inc., is a leading technology provider of productivity solutions for the construction, resources, geospatial, and transportation industries. “Whenever possible, we leverage AWS’s managed service offerings. We are excited to now use Amazon Timestream as a serverless time series database supporting our IoT monitoring solution,” said David Kohler, Engineering Director, Trimble. “Timestream is purpose-built for our IoT-generated time series data, and will allow us to reduce management overhead, improve performance, and reduce costs of our existing monitoring system.”

With over 60 years of fashion retailing experience, River Island is one of the most well known and loved brands with over 350 stores across Europe, Asia, and the Middle East, and six dedicated online sites operating in four currencies.

“The Cloud Engineering team have been excited about the release of Amazon Timestream for some time. We’ve struggled to find a time series data store that is simple, easy, and affordable,” said Tonino Greco, Head of Cloud and Infrastructure, River Island.

“With Amazon Timestream we get that and more. Amazon Timestream will enable us to build a central monitoring capability across all of our heritage systems, as well as our AWS hosted microservices. Interesting times!”

D2L is a global leader in educational technology, and the pioneer of the Brightspace learning platform used by customers in K-12, higher education, healthcare, government, and the corporate sector.

“Our team is excited to use Amazon Timestream for our internal synthetic monitoring tool, which currently stores data in a relational database,” said Andrew Alkema, Sr. Software Developer, D2L.

“By switching to Amazon Timestream, a fully managed time series database, we can maintain performance while reducing cost by over 80%. Timestream’s built-in storage tiering and configurable data retention policies are game-changers, and will save our team a lot of time spent on mundane activities.”

Fleetilla is a leading provider of end-to-end solutions for managing trailers, land-based intermodal containers, construction equipment, unpowered assets, and conventional commercial telematics for over-the-road vehicles.

“Fleetilla works with real-time telematics data from IoT devices around the world. Recently we saw a need to integrate a variety of different data feeds to provide a unified ‘single pane of glass’ view for complex mixed fleet environments. We are using Amazon Timestream to provide a cost-effective database system which will replace our existing complex solution composed of multiple other tools,” said Marc Wojtowicz, VP of IT and Cloud Services, Fleetilla.

“The fully managed Amazon Timestream service means less work for our DevOps team, the SDKs available in our preferred programming language mean simpler implementation for our developers, and the familiar SQL-based language means less learning curve for our data analysts. Timestream’s built-in scalability and analytics features allow us to offer faster and richer experiences to our customers, and the machine learning integration allows us to continue innovating and improving our services for our customers.”

CrowdStrike enhances services for AWS

CrowdStrike announced the expansion of support for Amazon Web Services (AWS) with new capabilities that deliver integrations for the compute services and cloud services categories. Through these expanded services, CrowdStrike is enhancing development, security and operations (DevSecOps) to enable faster and more secure innovation that is easier to deploy.

The expanded capabilities that CrowdStrike is delivering support the growing needs of today’s cloud-first businesses that are conducting business and innovating in the cloud. The CrowdStrike Falcon platform delivers advanced threat protection and comprehensive visibility that scale to secure cloud workloads and container deployments across organizations.

This enables enterprises to accelerate their digital transformation while protecting their businesses against the nefarious activity of sophisticated threat actors. The expanded support delivers customers comprehensive insight across different compute services, secure communication across the deployment fleet, automatic workload discovery, and comprehensive cloud visibility across multiple accounts.

“As security becomes an earlier part of the development cycle, development teams must be equipped with solutions that allow them to quickly and effectively build from the ground up the strength and protection needed for the evolving threat landscape,” said Amol Kulkarni, chief product officer of CrowdStrike. “Through our growing integrations and strong collaboration with AWS, CrowdStrike is providing security teams the scale and tools needed to adopt, innovate and secure technology across any workload with speed and efficiency, making it easier to address security issues in earlier phases of development and providing better, holistic protection and uptime for end users.”

AWS Graviton – CrowdStrike provides cloud-native workload protection for Amazon Elastic Compute Cloud (Amazon EC2) A1 instances powered by AWS Graviton Processors, as well as the C6g, M6g and R6g Amazon EC2 instances based on the new Graviton2 Processors. With the Falcon lightweight agent, customers receive the same seamless protection and visibility across different compute instance types with minimal impact on runtime performance. CrowdStrike Falcon secures Linux workloads running on ARM with no requirements for reboots, “scan storms” or invasive signature updates.

Amazon WorkSpaces – Amazon WorkSpaces is a fully managed, Desktop-as-a-Service (DaaS) solution that provides users with either Windows or Linux desktops in just a few minutes and can quickly scale to provide thousands of desktops to workers across the globe. CrowdStrike brings its industry-leading prevention and detection capabilities that include machine learning (ML), exploit prevention and behavioral detections to Amazon WorkSpaces, supporting remote workforces without affecting business continuity.

Bottlerocket – Bottlerocket is a new Linux-based open source operating system purpose-built by AWS for running containers on virtual machines or bare metal hosts, designed to improve the security and operations of organizations’ containerized infrastructure. CrowdStrike Falcon will provide run-time protection, unparalleled endpoint detection and response (EDR) visibility and container awareness, enabling customers to further secure their applications running on Bottlerocket.

Essential features of security automation for the AWS platform

DevSecOps tactics and tools are dramatically changing the way organizations bring their applications to fruition. Having a mindset that security must be incorporated into every stage of the software development lifecycle – and that everyone is responsible for security – can reduce the total cost of software development and ensure faster release of secure applications.

A common goal of any security strategy is to resolve issues quickly and safely before they can be exploited for a breach resulting in data loss. Application developers are not security specialists, and likely do not have the knowledge and skills to find and fix security issues in a timely manner. This is where security automation can help.

Security automation uses tools to continuously scan, detect, investigate, and remediate threats and vulnerabilities in code or the application environment, with or without human intervention. Tools scale the process of incorporating security into the DevSecOps process without requiring an increase in human skills or resources. They do this by automatically putting up safety rails around the issue whenever they find something that is a clear and obvious violation of security policy.

The AWS cloud platform is ripe for security automation

Amazon claims to have more than a million customers on its cloud computing platform, mostly small and mid-size companies but also enterprise-scale users. Regardless of customer size, Amazon has always had a model of shared responsibility for security.

Amazon commits to securing every component under its control. Customers, however, are responsible for securing what they control, which includes configurations, code, applications, and most importantly, data. This leaves a lot of opportunity for misconfigurations, insecure code, vulnerable APIs, and poorly secured data that can all lead to a data breach.

A common security problem in AWS is an open S3 storage bucket where data is publicly readable on the Internet. Despite the default configuration of S3 buckets being private, it’s fairly easy for developers to change policies to be open and for that permission change to apply in a nested fashion. A security automation tool should be able to find and identify this insecure configuration and simply disable public access to the resource without requiring human intervention.

Amazon added such tools in 2017 and again in 2018, yet we keep seeing headlines about companies whose data has been breached through open S3 buckets. The security tool should alert the appropriate teams, but in many situations, depending on how sensitive the data is, the tool should also auto-remediate the misconfigured access policies. Teams that embrace security automation can also use this kind of alerting and auto-remediation to become more aware of issues in their code or environment and, ideally, head them off before they occur again.
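As a sketch of what such auto-remediation can look like in practice (the function and bucket name below are hypothetical, and a real deployment would be invoked by a scanner finding rather than by hand), blocking all public access on a flagged bucket comes down to a single S3 API call:

```python
import boto3

s3 = boto3.client("s3")

def remediate_public_bucket(bucket_name: str) -> None:
    """Block all public access on a bucket flagged as publicly readable."""
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

# Example invocation for a hypothetical bucket reported by a scanner.
remediate_public_bucket("example-exposed-bucket")
```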

What else can be auto-remediated? There are hundreds of vulnerabilities in AWS that can and should be fixed without human intervention. Here are just a few examples:

  • AWS CloudTrail data-at-rest encryption levels
  • AWS CloudFront Distribution logging access control
  • AWS Elastic Block Store access control
  • AWS S3 bucket access control
  • AWS S3 bucket ransomware exposure
  • AWS Simple Queue Service exposure

Essential features of a security automation tool

There are important categories of features of a security automation product for AWS. One category addresses data-in-motion with auto-remediation of API and queuing authentication and encryption. The other addresses data-at-rest with auto-remediation of database and storage encryption and backup. Security monitoring and enforcement are needed to automatically protect developers from making mistakes in how they are moving or storing data.

Here are four essential features to look for in a security automation tool.

1. Continuous discovery of shadow APIs within cloud, mobile, and web apps

APIs enable machine-to-machine data retrieval, essentially removing barriers and accelerating access to data. There is hardly a modern application today that doesn’t provide an API to integrate with other applications and data sources. A developer only needs to write a few lines of code to create an API. A shadow API is one that operates outside the purview of the security team. It’s a challenge to enforce security on code that is known only to the programmer. Thus, a security automation tool must have the ability to continuously scan for and discover APIs that may pose a security threat to prevent a data breach.

2. Full-stack security analysis of mobile and modern web apps

Before data gets taken up into the AWS cloud, it often starts at the client layer with a web or mobile app. Protecting user privacy and securing sensitive data is a continuous effort that requires vulnerability analysis from mobile to web to backend cloud services. Modern attackers often focus on exploiting the client layer to hijack user sessions, embedded passwords, and toxic tokens left inside mobile apps or single-page applications.

3. Automation fully integrated into the CI/CD pipeline with support for auto-remediation

Most vulnerability assessment tools integrate into the CI/CD pipeline by reporting what they find to systems such as Jira, Bugzilla and Jenkins. This is table stakes for assessment tools. What’s more valuable, however, is to include auto-remediation of the issues in the CI/CD pipeline. Instead of waiting for a human to make and verify the fix for the vulnerability, the tool does it automatically and reports the results to the ticketing system. This frees developers from having to spend time resolving common issues.

4. Automated vulnerability hacking toolkits for scheduled pre-production assessments

Companies often hire white hat hackers to do, basically, a moment-in-time penetration test in their pre-production environment. A more modern approach is to deploy a toolkit that continuously performs the same hacking activities. Not only is using such a toolkit much more cost effective, but it also works non-stop to find and fix vulnerabilities.

When auto-remediation may not be appropriate

Automatic remediation of some security issues isn’t always appropriate. Rather, it’s better that the tool simply discovers the issue and raises an alert to allow a person to decide how to resolve it. For example, auto-remediation is generally unsuitable when an encryption key is required, such as for a database, and for configurations that require user interactions, such as selecting a VPC or an IAM rule. It’s also not appropriate when the fix requires changes to existing code logic within the customer’s proprietary code base.

Nonetheless, some tools do aid in dealing with insecure code. One helpful feature that isn’t found in all security automation tools is the recognition of faulty code and recommendations on how to fix it with secure code. Seeing the recommended code fix in the pre-production stage helps resolve issues quickly without wasting time doing research on why the code is troublesome. Developers get to focus on their applications while security teams ensure continuous security validation.

Summary

AWS is a complex environment with many opportunities for misconfigurations and other issues that can lead to a data breach. Security automation with auto-remediation takes pressure off developers to find and fix a wide variety of vulnerabilities in code and configurations to help keep their organizations’ data safe.

Maximizing data privacy: Making sensitive data secure by default

Maximizing data privacy should be on every organization’s priority list. We all know how important it is to keep data and applications secure, but what happens when access to private data is needed to save lives? Should privacy be sacrificed? Does it need to be?

Consider the case of contact tracing, which has become a key tool in the fight to control COVID-19. It’s a daunting task greatly facilitated by collecting and analyzing real-time identity and geo-location data gathered from mobile devices—sometimes voluntarily and sometimes not.

In most societies, such as the United States and the European Union, the use of location and proximity data by governments may be strictly regulated or even forbidden—implicitly impeding the ability to efficiently contain the spread of the virus. Where public health has been prioritized over data privacy, the use of automated tracing has contributed to the ability to quickly identify carriers and prevent disease spread. However, data overexposure remains a major concern for those using the application. They worry about the real threat that their sensitive location data may eventually be misused by bad actors, IT insiders, or governments.

What if it were possible to access the data needed to get contact tracing answers without actually exposing personal data to anyone anywhere? What if data and applications could be secure by default—so that data could be collected, stored, and results delivered without exposing the actual data to anyone except the people involved?

Unfortunately, current systems and software will never deliver the absolute level of data privacy required because of a fundamental hardware flaw: data cannot be simultaneously used and secured. Once data is put into memory, it must be decrypted and exposed to be processed. This means that once a bad actor or malicious insider gains access to a system, it’s fairly simple for that system’s memory and/or storage to be read, effectively exposing all data. It’s this data security flaw that’s at the foundation of virtually every data breach.

Academic and industry experts, including my co-founder Dr. Yan Michalevsky, have known for years that the ultimate, albeit theoretical, resolution of this flaw was to create a compute environment rooted in secure hardware. These solutions have already been implemented in cell phones and some laptops to secure storage and payments, and they are working well, proving the concept works as expected.

It wasn’t until 2015 that Intel introduced Software Guard Extensions (SGX)—a set of security-related machine-level instruction codes built into their new CPUs. AMD has also added a similar proprietary instruction set called SEV technology into their CPUs. These new and proprietary silicon-level command sets enable the creation of encrypted and isolated parts of memory, and they establish a hardware root of trust that helps close the data security flaw. Such isolated and secured segments of memory are known as secure enclaves or, more generically, Trusted Execution Environments (TEEs).

A broad consortium of cloud and software vendors (called the Confidential Computing Consortium) is working to develop these hardware-level technologies by creating the tools and cloud ecosystems over which enclave-secured applications and data can run. Amazon Web Services announced its version of secure enclave technology, Nitro Enclaves, in late 2019. Most recently, both Microsoft (Azure confidential computing) and Google announced their support for secure enclaves as well.

These enclave technologies and secure clouds should enable applications, such as COVID-19 contact tracing, to be implemented without sacrificing user privacy. The data and application enclaves created using this technology enable sensitive data to be processed without ever exposing either the data or the computed results to anyone but the actual end user. This means public health organizations can have automated contact tracing that can identify, analyze, and provide needed alerts in real-time—while simultaneously maximizing data privacy.

Creating or shifting applications and data to the secure confines of an enclave can take a significant investment of time, knowledge, and tools. That’s changing quickly. New technologies are becoming available that will streamline the operation of moving existing applications and all data into secure enclaves without modification.

As this happens, all organizations will be able to secure all data by default. This will enable CISOs, security professionals—and public health officials—to sleep soundly, knowing that private data and applications in their care will be kept truly safe and secure.

Modshield SB application firewall now available in the AWS Marketplace

StrongBox IT released its flagship application firewall – Modshield SB, now available in the AWS Marketplace on a cloud subscription model and a Bring Your Own License (BYOL) model.

A feature-rich, scalable and cost-effective application firewall, Modshield SB is designed to provide protection against all major attack vectors (OWASP Top 10 and more). It supports multiple domains and applications using a single instance with no additional license costs.

With curated threat intelligence updated continuously, Modshield SB offers real-time protection from bots, crawlers, spiders, bad IPs and Tor IPs. The built-in Denial of Service (DoS) protection also helps safeguard the application from random repetitive attacks, ensuring availability of applications.

Modshield SB uses Modsecurity and OWASP Core ruleset as its engine and StrongBox IT’s proprietary code as a UI and enablement wrapper.

“With proven core components and a host of user-friendly features, Modshield SB is going to be an integral tool for businesses looking to implement a first line of defence for their business systems without any compromise in cost economies. Modshield SB is a user-friendly, robust security solution available right within AWS Marketplace, and by far one of the most affordable too,” said Joseph Martin, Strongbox IT’s CEO.

Key features:

  • With Modsecurity and OWASP Core ruleset at its heart, Modshield makes it simple and efficient to protect web and mobile applications from OWASP Top 10 application risks
  • Modshield SB supports both HTTP and HTTPS traffic for applications hosted in a single server or load balanced across multiple servers
  • Allows addition of multiple domains and applications using a single instance at no additional cost
  • Curated threat intelligence is updated continuously for real-time protection
  • Built-in Denial of Service (DoS) protection helps safeguard the application from random repetitive attacks
  • Protection against bots, crawlers, and spiders
  • Unlimited number of custom rules for monitoring traffic to and from applications
  • Log management capabilities allow for external archival, FTP transfers and real-time log forwarding to external monitoring systems
  • Modshield has a user-friendly dashboard making available KPIs useful for monitoring and for regulatory compliance
  • Data Loss Protection (DLP), a beta feature, allows definition of sensitive data, thereby protecting against specific data leakages
  • Modshield SB is currently available as an AMI on the AWS Marketplace and as a Virtual Machine (VM) for physical implementations.

New infosec products of the week: July 31, 2020

Qualys unveils Multi-Vector EDR, a new approach to endpoint detection and response

Traditional EDR solutions singularly focus on endpoints’ malicious activities to hunt and investigate cyberattacks. Qualys’ multi-vector approach provides critical context and full visibility into the entire attack chain to provide a comprehensive, more automated and faster response to protect against attacks.

McAfee MVISION Cloud now maps threats to MITRE ATT&CK

With the introduction of ATT&CK into McAfee MVISION Cloud, there is no longer the need to manually sort and map incidents to a framework like ATT&CK or to learn and operationalize a separate framework for cloud threats and vulnerabilities, which can be cumbersome and time consuming – especially as cloud-native threats become more abundant.

Amazon Fraud Detector: Use machine learning in the fight against online fraud

Amazon Fraud Detector is a fully managed service that makes it easy to quickly identify potentially fraudulent online activities like online payment and identity fraud. With just a few clicks in the Amazon Fraud Detector console, customers can select a pre-built machine learning model template, upload historical event data, and create decision logic to assign outcomes to the predictions.

Veritas is unifying data protection, from the edge to core to cloud

Veritas Technologies introduced new innovations to its Enterprise Data Services Platform to help customers reduce risk, optimize cost, strengthen ransomware resiliency, and manage multi-cloud environments at scale. With the launch of NetBackup 8.3, Veritas empowers enterprise customers by improving the resiliency of their applications and infrastructure regardless of the context.

Sonrai Dig maps relationships between identities and data inside public clouds

Sonrai Security announced the Governance Automation Engine for Sonrai Dig, re-inventing how customers ensure security in AWS, Azure, Google Cloud and Kubernetes by automatically eliminating identity risks and reducing unwanted access to data.

Pulse Zero Trust Access simplifies management and mitigates cyber risks

Pulse Zero Trust Access simplifies access management with single-pane-of-glass visibility, end-to-end analytics, granular policies, automated provisioning, and advanced threat mitigation that empowers organizations to further optimize their increasingly mobile workforce and hybrid IT resources.

CyberStrong platform updates allow customers to dynamically manage their risk posture

The updates reinforce CyberSaint’s mission to enable organizations to manage cybersecurity as a business function by enabling agility, measurement, and automation across risk, compliance, audit, vendor, and governance functions for information security organizations.

Public cloud environments leave numerous paths open for exploitation

As organizations across industries rapidly deploy more assets in the public cloud with Amazon, Microsoft, and Google, they’re leaving numerous paths open for exploitation, according to Orca Security.

Cloud estates are being breached through their weakest links of neglected internet-facing workloads, widespread authentication issues, discoverable secrets and credentials, and misconfigured storage buckets.

While public cloud providers such as AWS, Microsoft Azure, and Google Cloud Platform keep their platforms secure, customers are still responsible for securing the workloads, data, and processes they run inside the cloud – just as they do in their on-prem world.

Such shared responsibility poses a serious challenge due to the speed and frequency of public cloud deployments. For most organizations, cloud workload security is dependent upon the installation and maintenance of security agents across all assets. However, IT security teams are not always informed of cloud deployments, so this lack of visibility results in missed vulnerabilities and attack vectors.

“While organizations must secure their entire estate, attackers only need to find a single weak link to exploit,” said Avi Shua, CEO, Orca Security. “It’s imperative for organizations to have 100 percent public cloud visibility and know about all neglected assets, weak passwords, authentication issues, and misconfigurations to prioritize and fix. The Orca Security 2020 State of Public Cloud Security Report shows how just one gap in cloud coverage can lead to devastating data breaches.”

Neglected internet-facing workloads

Attackers look for vulnerable frontline workloads to gain entrance to cloud accounts and expand laterally within the environment. While security teams need to secure all public cloud assets, attackers only need to find one weak link.

  • The study found more than 80 percent of organizations have at least one neglected, internet-facing workload – meaning it’s running on an unsupported operating system or has remained unpatched for 180 days or more
  • Meanwhile, 60 percent have at least one neglected internet-facing workload that has reached its end of life and is no longer supported by manufacturer security updates
  • 49 percent of organizations have at least one publicly accessible, unpatched web server despite increased awareness of how that can result in large data breaches

Authentication and credential issues

Weak security authentication is another way that attackers breach public cloud environments. Researchers found that authentication and password storage issues are commonplace.

  • Almost half the organizations (44 percent) have internet-facing workloads containing secrets and credentials that include clear-text passwords, API keys, and hashed passwords that allow lateral movement across their environment
  • Meanwhile, 24 percent have at least one cloud account that doesn’t use multi-factor authentication for the super admin user; 19 percent have cloud assets accessible via non-corporate credentials
  • Additionally, five percent have cloud workloads that are accessible using either a weak or leaked password

Lateral movement risk

All weak links combine to pose serious cloud security and lateral movement attack risk for any organization. Attackers also take advantage of knowing that internal servers are less protected than external internet-facing servers and that they can expand rapidly in search of critical data once inside a cloud estate.

  • The security posture of internal machines is much worse than internet-facing servers, with 77 percent of organizations having at least 10 percent of their internal workloads in a neglected security state
  • Additionally, six percent of internet-facing assets contain SSH keys that could be used to access adjacent systems

Amazon Fraud Detector: Use machine learning in the fight against online fraud

Amazon Fraud Detector is a fully managed service that makes it easy to quickly identify potentially fraudulent online activities like online payment and identity fraud.

Using machine learning under the hood and based on over 20 years of fraud detection expertise from Amazon, Amazon Fraud Detector automatically identifies potentially fraudulent activity in milliseconds—with no machine learning expertise required. With just a few clicks in the Amazon Fraud Detector console, customers can select a pre-built machine learning model template, upload historical event data, and create decision logic to assign outcomes to the predictions (e.g. initiate a fraud investigation when the machine learning model predicts potentially fraudulent activity).

There are no up-front payments, long-term commitments, or infrastructure to manage with Amazon Fraud Detector, and customers pay only for their actual usage of the service.

Today, tens of billions of dollars are lost to online fraud every year by organizations around the world. As a result, many businesses invest in large, expensive fraud management systems. These systems are often based on hand-coded rules that are time-consuming to set up, expensive to customize, and difficult to keep up-to-date as fraud patterns change—all of which leads to lower accuracy. This leads organizations to reject good customers as fraudsters, conduct more costly fraud reviews, and miss opportunities to drive down fraud rates.

Amazon has made significant investments over the past 20 years to combat fraudulent activity using sophisticated machine learning techniques that minimize customer friction while staying one step ahead of bad actors, and customers have asked Amazon to share this expertise and experience to help them combat online fraud.

Amazon Fraud Detector provides a fully managed service that uses machine learning for detecting potential fraud in real time (e.g. online payment and identity fraud, the creation of fake accounts, loyalty account and promotion code abuse, etc.), based on the same technology used by Amazon.com—with no machine learning experience required. With Amazon Fraud Detector, customers use their historical data of both fraudulent and legitimate transactions to build, train, and deploy machine learning models that provide real-time, low-latency fraud risk predictions.

To get started, customers upload historical event data (e.g. transactions, account registrations, loyalty points redemptions, etc.) to Amazon Simple Storage Service (Amazon S3), where it is encrypted in transit and at rest and used to customize the model’s training. Customers only need to provide any two attributes associated with an event (e.g. logins, new account creation, etc.) and can optionally add other data (e.g. billing address or phone number).

Based upon the type of fraud customers want to predict, Amazon Fraud Detector will pre-process the data, select an algorithm, and train a model. Amazon Fraud Detector uses machine learning models based on Amazon’s 20+ years of experience with fraud to help identify patterns commonly associated with fraudulent activity. This improves the accuracy of the trained model even if the number of fraudulent examples provided by a customer to Amazon Fraud Detector is low.

Amazon Fraud Detector trains and deploys a model to a fully managed, private Application Programming Interface (API) end point. Customers can send new activity (e.g. signups or new purchases) to the API and receive a fraud risk response, which includes a precise fraud risk score. Based on the report, a customer’s application can determine the right course of action (e.g. accept a purchase, or pass it to a human for review). With Amazon Fraud Detector, customers can detect fraud more quickly, easily, and accurately with machine learning while also preventing fraud from happening in the first place.
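To illustrate what the prediction call can look like from an application (all identifiers below are hypothetical and would need to match a detector and event type created beforehand in the Amazon Fraud Detector console or API), a boto3 sketch might resemble the following:

```python
import boto3
from datetime import datetime, timezone

fraud = boto3.client("frauddetector")

# Score a new account sign-up. The detector, event type, entity and variable
# names are hypothetical and must match what was defined when the detector was built.
response = fraud.get_event_prediction(
    detectorId="new_account_detector",
    eventId="signup-0001",
    eventTypeName="account_registration",
    eventTimestamp=datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    entities=[{"entityType": "customer", "entityId": "anonymous-123"}],
    eventVariables={
        "email_address": "user@example.com",
        "ip_address": "203.0.113.10",
    },
)

# Each matched rule returns the outcomes (e.g. "review" or "approve")
# that the application can act on.
for rule in response.get("ruleResults", []):
    print(rule.get("ruleId"), rule.get("outcomes"))
```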

“Customers of all sizes and across all industries have told us they spend a lot of time and effort trying to decrease the amount of fraud occurring on their websites and applications,” said Swami Sivasubramanian, Vice President, Amazon Machine Learning, Amazon Web Services Inc. “By leveraging 20 years of experience detecting fraud coupled with powerful machine learning technology, we’re excited to bring customers Amazon Fraud Detector so they can automatically detect potential fraud, save time and money, and improve customer experiences—with no machine learning experience required.”

Developers with machine learning experience who want to extend what Amazon Fraud Detector delivers can customize Amazon Fraud Detector using a combination of machine learning models built with Amazon Fraud Detector and those built with Amazon SageMaker (a fully managed service for building, training, and deploying machine learning models quickly). Amazon Fraud Detector is available today in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Singapore), and Asia Pacific (Sydney), with availability in additional regions in the coming months.

“GoDaddy is committed to preventing fraudulent accounts, and we’re continually bolstering our capabilities to automatically detect such accounts during sign-up,” said John Kercheval, Senior Director, Identity Services Group at GoDaddy. “We recently began using Amazon Fraud Detector, and we’re pleased that it offers low cost of implementation and a self-service approach to building a machine learning model that is customized to our business. The model can be easily deployed and used in our new account process without impacting the signup experience for legitimate customers. The model we built with Amazon Fraud Detector is able to detect likely fraudulent sign-ups immediately, so we’re very pleased with the results and look forward to accomplishing more.”

Sonrai Dig maps relationships between identities and data inside public clouds

Sonrai Security announced the Governance Automation Engine for Sonrai Dig, re-inventing how customers ensure security in AWS, Azure, Google Cloud and Kubernetes by automatically eliminating identity risks and reducing unwanted access to data.

This enables enterprise companies to achieve and maintain least privilege, enforce separation of duties, eliminate complex identity risks and lock down critical data. Workflow and role-based swimlanes route alerts and recommend actions to cloud, security, audit or DevOps teams, or deploy remediation bots to address security issues.

The new Governance Automation Engine helps enterprises address critical pain points including security breaches caused by identity policy misconfiguration and data risks that go beyond S3 buckets. It extends to include databases like Amazon RDS, DynamoDB, CosmosDB and many others, addressing disconnects among cloud, security, audit and DevOps teams with widely disparate cloud security toolsets.

“The acceleration of migrations from on-prem datacenters to the cloud presents an entirely new set of challenges for global enterprises that cannot be fully addressed by the security approaches of the past,” said Richard Stiennon, chief research analyst, IT-Harvest. “Security for public clouds must center on effective governance and security of three critical control points – identities, data and platform – to understand, monitor and minimize risk. Effective solutions will be those that go well beyond simply presenting dashboards of cloud provider tools and bring entirely new identity and data analytics to the mix.”

Cloud security complexity

For enterprise organizations, public cloud expansion quickly leads to hundreds of cloud accounts, thousands of data stores and tens of thousands of ephemeral pieces of compute involving multitudes of development teams. Improperly set up, this growing array of interdependencies and inheritances can open up many security risks such as over-permissioned identities, separation of duties risks and excessive access paths to critical data. Legacy cloud security tools have failed to address identity and data complexity and either miss critical vulnerabilities or send continuous alarms, creating high levels of noise that overwhelm security teams’ resources and lead to inaction.

Sonrai Dig

The Sonrai Dig platform builds a comprehensive graph detailing every relationship between identities (people and non-people) and data that exist within cloud platforms like AWS, Azure, GCP and Kubernetes. The analytics provided atop that graph allow users to understand, eliminate, and continuously monitor risk. Swimlane workflows enable escalations, certifications and risk-exception handling and provide role-based access control for workloads, teams and cloud platforms to ensure adherence to policy.

New automation capabilities

The Governance Automation Engine for Sonrai Dig automatically dispatches prevention and remediation bots and provides safeguards in the form of code promotion blocks. Helping to ensure end-to-end security in public cloud platforms, Sonrai Dig also fosters excellence in the application lifecycle and in DevOps by preventing users from promoting code to the next stage of the development cycle if public cloud security requirements are unmet.

Extensive integration ecosystem

Sonrai Security has worked closely with its growing integration ecosystem to ensure cross-platform compatibility through API integrations, including:

Public Cloud: AWS, Azure, Google Cloud (GCP), Kubernetes
IAM: AWS IAM, Azure AD, GCP IAM
Audit: AWS CloudTrail, Azure activity logs, GCP Stackdriver
Data Stores: DynamoDB, RDS, Cosmos DB, Data Lake, SQL, Big Table
Key Stores: KMS, HashiCorp Vault
Infrastructure: WAF, CloudFront, ELB
Compute: ECS, Lambda, Azure Serverless

“Enterprise companies’ explosive expansion of cloud-native development creates a dizzying number of ways people and non-people identities access corporate data, creating unacceptable risk,” said Brendan Hannigan, CEO, Sonrai Security. “Sonrai provides unique technology to find and eliminate all of these risks, in a way that aligns with how applications are developed in today’s world. Our swimlanes, workflow and remediation capabilities are integrated seamlessly to automatically de-risk complex environments and represent an entirely new and effective approach to security.”

Attackers exploit Twilio’s misconfigured cloud storage, inject malicious code into SDK

Twilio has confirmed that, for eight or so hours on July 19, a malicious version of their TaskRouter JS SDK was being served from one of their AWS S3 buckets.

Twilio malicious SDK

“Due to a misconfiguration in the S3 bucket that was hosting the library, a bad actor was able to inject code that made the user’s browser load an extraneous URL that has been associated with the Magecart group of attacks,” the company shared.

Who’s behind the attack?

Twilio is a cloud communications platform as a service (CPaaS) company, which provides web service APIs developers can use to add messaging, voice, and video in their web and mobile applications.

“The TaskRouter JS SDK is a library that allows customers to easily interact with Twilio TaskRouter, which provides an attribute-based routing engine that routes tasks to agents or processes,” Twilio explained.

The misconfigured AWS S3 bucket, which is used to serve public content from the domain twiliocdn.com, also hosts copies of other Twilio SDKs, but only the TaskRouter SDK was modified.

The misconfiguration allowed anybody on the Internet to read and write to the S3 bucket, and the opportunity was seized by the attacker(s).

“We do not believe this was an attack targeted at Twilio or any of our customers,” the company opined.

“Our investigation of the javascript that was added by the attacker leads us to believe that this attack was opportunistic because of the misconfiguration of the S3 bucket. We believe that the attack was designed to serve malicious advertising to users on mobile devices.”

Jordan Herman, Threat Researcher at RiskIQ, which detailed previous threat campaigns that used the same malicious traffic redirector, told Help Net Security that because of how easy misconfigured Amazon S3 buckets are to find and the level of access they grant attackers, they are seeing attacks like this happening at an alarming rate.

Om Moolchandani, co-founder and CTO at code-to-cloud security company Accurics, noted that there are many similarities between waterhole attacks and the Twilio incident.

“Taking over a cloud hosted SDK allows attackers to ‘cloud waterhole’ into the victim environments by landing directly into the operation space of victims,” he said.

The outcome

Following this incident, Twilio checked the permissions on all of their AWS S3 buckets and found others that were misconfigured, but those buckets stored no production or customer data and had not been tampered with.

“During our incident review, we identified a number of systemic improvements that we can make to prevent similar issues from occurring in the future. Specifically, our teams will be engaging in efforts to restrict direct access to S3 buckets and deliver content only via our known CDNs, improve our monitoring of S3 bucket policy changes to quickly detect unsafe access policies, and determine the best way for us to provide integrity checking so customers can validate that they are using known good versions of our SDKs,” the company shared.
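One of those controls, restricting public access at the bucket level, can be switched on with a single API call. Below is a minimal sketch using boto3 and Amazon S3's Block Public Access feature; the bucket name is hypothetical and this is an illustration of the control, not Twilio's actual remediation.

import boto3

s3 = boto3.client("s3")

# Turn on all four Block Public Access settings for a single bucket so that
# public ACLs and public bucket policies can no longer expose its contents.
s3.put_public_access_block(
    Bucket="example-cdn-origin-bucket",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

With public access blocked, content can then be delivered through a CDN configured with private origin access rather than directly from a publicly writable bucket.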

They say it’s difficult to gauge the impact of the attack on individual users, since the “links used in these attacks are deprecated and rotated and since the script itself doesn’t execute on all platforms.”

The company urges those who downloaded a copy of the TaskRouter JS SDK between 1:12 PM PDT on July 19, 2020 and 10:30 PM PDT on July 20 (UTC-07:00) to re-download it, check its integrity, and replace it.
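Until Twilio ships official integrity checking, one simple way to verify a re-downloaded copy is to compare its hash against a digest taken from a copy you know to be clean. A minimal sketch follows; the file name and digest placeholder are illustrative, not values published by Twilio.

import hashlib

# Digest recorded from a copy of the SDK known to be clean (placeholder value).
KNOWN_GOOD_SHA256 = "replace-with-digest-of-a-known-good-copy"

with open("taskrouter.min.js", "rb") as f:  # path to the re-downloaded SDK
    digest = hashlib.sha256(f.read()).hexdigest()

if digest != KNOWN_GOOD_SHA256:
    raise SystemExit("Downloaded SDK does not match the known-good digest")
print("SDK integrity check passed")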

“If your application loads v1.20 of the TaskRouter JS SDK dynamically from our CDN, that software has already been updated and you do not need to do anything,” they pointed out.

Using confidential computing to protect Function-as-a-Service data

Organizations are embracing the power of Function-as-a-Service (FaaS). FaaS can be viewed as a very positive and beneficial result of years of data successfully migrating to and operating in public clouds. AWS Lambda, Azure Functions and Google Cloud Functions are today’s market-leading platforms for enterprises looking to realize the power and benefits of FaaS.

Function-as-a-Service

FaaS likely won’t replace all of an enterprise’s IT functions in public clouds, but leveraging it for most stateless business operations can help organizations realize economies of scale and ROI from their public cloud deployments. With FaaS emerging on the scene, however, organizations may wonder how best to protect their cloud data and orchestrate security in public clouds.

Enterprise key management services powered by secure enclaves are an effective approach to not only securely executing programs and business logic in a FaaS environment, but also enabling the entire execution to be protected and achieve the secure attributes of confidential computing. Secure enclaves enable enterprise key management services to secure data not only during runtime, but also to protect it if the hardware is ever compromised. This enables organizations to leverage the benefits of public clouds, but not make their security in the cloud public.

Enterprise key management services as a rule should be highly scalable, have built-in high availability and disaster recovery support. In addition, organizations looking to achieve the benefits of secure Function-as-a-Service should consider enterprise key management services that have the following features:

  • Enterprise key management and secrets management
  • Application encryption, tokenization and data masking
  • Multi-tenancy
  • Hardware security module (HSM) functionality with cloud-like scalability
  • FIPS 140-2 Level 3 certification

Secure Function-as-a-Service use cases

Enterprise key management services are powerful technologies for confidential computing that can help organizations decentralize and execute their most sensitive business logic outside of public clouds in a completely confidential manner. Popular use cases demonstrating how organizations are realizing these benefits today include:

Storing credit history in AWS

A large financial firm uploads its customers’ credit history and private data into AWS S3 buckets protected by client-side encryption using an enterprise key management service. Using this approach, it can run confidential credit forecasting logic based on historical trends for each customer, assured throughout the analysis that the data remains protected at every stage: at rest, in transit and during runtime. The steps below give an example of how confidential computing can help protect private financial data:

1. The AWS Lambda function reads the customers’ encrypted private information and credit record data from AWS S3.
2. The Lambda function passes that information as JSON to the enterprise key management service, where the confidential credit forecasting logic runs inside a secure enclave.
3. The secure enclave decrypts the data using the key held by the enterprise key management service, runs the business logic on it, and passes the encrypted result back to the Lambda function in JSON format.
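A minimal sketch of this flow from the Lambda side is shown below. The enclave endpoint, bucket name and payload format are hypothetical, since the integration details depend on the specific enterprise key management service; the point is that the function only ever handles ciphertext.

import json
import urllib.request

import boto3

s3 = boto3.client("s3")

# Hypothetical HTTPS endpoint exposed by the key management service's enclave.
ENCLAVE_URL = "https://ekms.example.internal/credit-forecast"


def handler(event, context):
    # Step 1: read the customer's client-side-encrypted credit record from S3.
    obj = s3.get_object(Bucket="credit-history-bucket", Key=event["customer_key"])
    ciphertext = obj["Body"].read().decode("utf-8")

    # Step 2: hand the ciphertext to the enclave, where it is decrypted and the
    # confidential forecasting logic runs; the Lambda never sees plaintext.
    payload = json.dumps({"ciphertext": ciphertext}).encode("utf-8")
    req = urllib.request.Request(
        ENCLAVE_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Step 3: the enclave returns an encrypted forecast result as JSON.
        return json.loads(resp.read())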

Storing health records in Google Cloud Platform

A global healthcare organization saves a customer’s SSN in BigQuery encrypted by an enterprise key management service. Before approving the customer’s health record, its fraud detection application needs to compare this SSN with SSNs that may have been compromised recently. The health organization must gather the list of breached SSNs from a reputable third-party vendor. However, without confidential computing, such a computation in the public cloud could be risky. The steps below show how an enterprise key management service can help the health organization avoid this risk:

1. The health record fraud detection application running in Google Cloud Platform reads the secret encrypted by the enterprise key management service from BigQuery and sends the ciphertext to the secure enclave.
2. The enterprise key management service decrypts it with the right key, calls out to the third-party vendor for the list of breached SSNs, runs the sensitive business logic and returns a Boolean response.
3. Based on the response, the health record fraud detection application takes further action.

Executing financial transaction across public clouds

A Fortune 50 bank uses both AWS and Azure to serve customers by running workloads across many regions. Its applications deployed in AWS and Azure talk to each other over TLS. However, there are certain transactions where the organization needs to transfer customers’ PINs from AWS to Azure. For security, the PIN not only needs to be encrypted with an AES key, it also needs to be tokenized before it is received by another customer-facing application hosted in Azure. The steps below give an example of how confidential computing can help the bank complete this transaction securely:

1. The AWS application encrypts the PIN using the enterprise key management service’s application encryption.
2. It then sends the encrypted PIN to the secure enclave, which first decrypts the PIN using the same key and then tokenizes it using the predefined token policy.
3. The enterprise key management service calls the Azure application and sends the tokenized PIN as a JSON response.
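The sketch below illustrates steps 1 and 2 from the AWS application’s point of view. The endpoints, key identifier and token policy name are hypothetical, since the request format depends entirely on the key management vendor’s API.

import json
import urllib.request

# Hypothetical endpoints exposed by the enterprise key management service.
EKMS_ENCRYPT_URL = "https://ekms.example.internal/encrypt"
EKMS_TOKENIZE_URL = "https://ekms.example.internal/tokenize"


def post_json(url: str, payload: dict) -> dict:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Step 1: encrypt the PIN under the shared AES key via application encryption.
encrypted = post_json(EKMS_ENCRYPT_URL, {"key_id": "pin-aes-key", "plaintext": "4921"})

# Step 2: send the ciphertext to the secure enclave, which decrypts it and
# applies the predefined token policy before forwarding the token to Azure.
tokenized = post_json(
    EKMS_TOKENIZE_URL,
    {"ciphertext": encrypted["ciphertext"], "token_policy": "pin-format-preserving"},
)
print(tokenized["token"])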

Providing a trusted execution environment for functions is a valuable feature of enterprise key management services. It not only offers enterprises flexible key management and comprehensive data protection, but also gives them a way to apply on-demand confidentiality to multi-cloud workloads, even for the most sensitive business logic. With enterprise key management services, organizations can be assured that their data and applications are confidential in public clouds and will stay private even if the hardware is compromised.

AWS Snowcone: Ultra-portable, secure edge computing, storage, and data transfer device

Amazon Web Services announced AWS Snowcone, a new small, ultra-portable, rugged, and secure edge computing and data transfer device.

AWS Snowcone

At under 5 lbs and able to fit in a standard mailbox or a small backpack, AWS Snowcone is the smallest member of the AWS Snow Family of devices, enabling customers to collect data, process it locally, and move it to AWS either offline (by shipping the device to AWS) or online (by using AWS DataSync to send the data to AWS over the network).

AWS Snowcone is built to withstand harsh conditions and is designed for a variety of use cases in environments outside of the traditional data center that lack consistent network connectivity and/or require portability, including healthcare, industrial IoT, drones, tactical edge computing, content distribution, data migration, video content creation, and transportation.

Millions of customers have applications that rely on AWS for data processing, analytics, storage, IoT, and machine learning capabilities. Increasingly, edge computing use cases (like machine learning inference, industrial IoT, and autonomous machines) drive customers’ need to move data capture, processing, and analysis closer to end users and devices.

Running applications in disconnected environments or connected edge locations can be challenging because often these locations lack the space, power, and cooling common in data centers.

Today, customers use the AWS Snow Family (a collection of physical devices designed to run outside the data center and ranging in size from the suitcase-sized AWS Snowball to the 45-foot long ruggedized shipping container AWS Snowmobile) to collect and process data, run local computing applications, and move large volumes of data from log files, digital media, genomic data, and sensor data from connected devices to AWS.

As customers seek to extend the reach of their cloud infrastructure into more edge locations, new use cases are emerging for edge computing in more situations like vehicles (land, sea, and air), industrial operations, and remote and austere sites that require a rugged and secure device with even greater portability and a smaller form factor than the AWS Snow Family of devices has traditionally provided.

AWS Snowcone provides maximum flexibility for edge computing environments, offering a small, ultra-portable, rugged, and military grade secure device to run applications and migrate data to AWS.

AWS Snowcone measures 9 inches x 6 inches x 3 inches (23 cm x 15 cm x 8 cm) and weighs 4.5 lbs (2.1 kg). The device can easily fit in a backpack or messenger bag, standard mailbox, or any type of vehicle, and is light enough to be carried by drone.

AWS Snowcone can move data to AWS offline by shipping the device (using its E Ink shipping label) and online using Ethernet or Wi-Fi with AWS DataSync, which are integrated into the device.

AWS Snowcone features 2 CPUs, 4 GB of memory, 8 TB of storage, and USB-C power (or optional battery). AWS Snowcone is designed to operate in extreme environments or disconnected remote sites (including oil rigs, first responder vehicles, military operations, factory floors, remote offices, hospitals, or movie theaters) for long periods of time without traditional data center conditions.

With support for AWS IoT Greengrass, the ability to run Amazon Elastic Compute Cloud (Amazon EC2) instances, and ample local storage, AWS Snowcone can be used as an IoT hub, data aggregation point, application monitor, or lightweight analytics engine.

“Thousands of our customers have found AWS Snowball devices to be ideal for collecting data and running applications in remote and harsh environments. Since 2015, customer use of Snowball devices has greatly increased, as has their need for an even smaller device with even greater portability,” said Bill Vass, VP of Storage, Automation and Management Services, AWS.

“With more applications running at the edge for an expanding range of use cases, like analyzing IoT sensor data and machine learning inference, AWS Snowcone makes it easier to collect, store, pre-process, and transfer data from harsh environments with limited space to AWS for more intensive processing.”

Like other AWS Snow Family devices, all data on AWS Snowcone is encrypted using military grade 256-bit keys that customers can manage using the AWS Key Management Service (KMS). Additionally, AWS Snowcone contains anti-tamper and tamper-evident features to help ensure data on the device stays secure during transit.

AWS Snowcone meets stringent ruggedization standards (it meets ISTA-3A, ASTM D4169, and MIL-STD-810G standards) for free-fall shock and operational vibration. The device is dust-tight and water resistant (it meets the IP65 International Protection Marking IEC standard).

AWS Snowcone has a wide operating temperature range from freezing (0 degrees C/32 degrees F) to desert-like conditions (38 degrees C/100 degrees F), and withstands even harsher temperatures when in storage or being shipped (-32 degrees C/-25.6 degrees F to 63 degrees C/145.4 degrees F).

Setting up an AWS Snowcone takes only 3 clicks, and once set up, the device is easily used and managed through a simple graphical interface. AWS Snowcone is available in the US East (Northern Virginia) and US West (Oregon) AWS Regions, with availability planned in additional AWS Regions in the coming months.

Deluxe Entertainment Services (Deluxe) is the world’s leading video creation-to-distribution company offering global, end-to-end services and technology.

“Our work with AWS is a fundamental part of Deluxe’s strategy. One VZN, our new cloud-based IP delivery solution, is a leap in innovation for digital cinema distribution which has been stagnant over the past decade, fundamentally changing not only the economics of film distribution for exhibitors and studios around the globe, but enabling new theatrical experiences for viewers as well,” said Andy Shenkler, Chief Product and Technology Officer, Deluxe.

“The combination of AWS Snowcone, AWS DataSync, and Amazon S3 are essential to One VZN – the collective services distinguish themselves from any other cloud offerings today. The secure, compact AWS Snowcone device, in conjunction with Deluxe’s powerful cloud-based solutions create a truly differentiated solution of the industry.”

SmugMug+Flickr is the world’s largest and most influential photographer-centric platform. “For over a decade SmugMug has been storing billions of photos and videos for our millions of users in an accessible, available, and secure way using Amazon S3.

“With the introduction of AWS Snowcone, our customers now have a small, ultra-portable device with 8 terabytes of storage that allows them to capture more content during a photography or videography shoot, process the data locally, and then upload it to the cloud faster than ever,” said Don MacAskill, CEO & Chief Geek, SmugMug and Flickr.

“AWS Snowcone’s small form factor and rugged, secure design allows us to simply ship these devices to customers so they can seamlessly copy and preserve their precious and valuable media whether in the comfort of their own home or out on a shoot off the grid in jungles, deserts, forests, mountains, in the middle of the ocean, and even in drone use cases.

“Customers can then ship the device back to AWS or retain the device for multiple shoots all while transferring their data online with AWS DataSync. AWS Snowcone helps our users and photographers effortlessly and flexibly share their passion, work, and inspiration with each other leveraging the SmugMug platform.”

Federated Wireless leads the industry in the development of shared spectrum Citizens Broadband Radio Service (CBRS) capabilities. The company’s partner ecosystem includes more than 40 device manufacturers along with cloud edge partners, all dedicated to collaborating on the advancement of CBRS services.

“The combination of AWS Snowcone and the Federated Wireless Connectivity-as-a-Service private wireless solution enables an expansive variety of new business models and indoor and outdoor scenarios at the edge of the cloud, and even beyond the edge of the network,” said Iyad Tarazi, President and CEO of Federated Wireless.

“The small form factor and robust capabilities of Snowcone provides us more flexibility in our technology architecture. The simplified, innovative delivery and proven cloud capabilities of the combined solution give organizations of all types and sizes the security, compute power, portability, and mobility they need to meet their most pressing business and operational objectives.”

Novetta is an advanced analytics solutions company focused on mission success for its customers in the public sector, defense, intelligence, and federal law enforcement communities.

“AWS is leading in the development of practical, mission-ready edge computing technology that we use to build solutions to help our public sector clients save lives. AWS Snowcone is a great example of AWS innovation for the edge,” said Rob Sheen, Senior Vice President, Client Operations at Novetta.

“Snowcone gives us a rugged, secure, and portable edge computing platform that we can use in disaster zones and austere edge locations. In our recent field exercises, Snowcone performed admirably as a sensor hub at the edge to track people and assets in a disaster zone.”

CommScope pushes the boundaries of technology to create the world’s most advanced wired and wireless networks. “Companies looking to deploy private LTE networks and need simple, easy install solutions will find AWS Snowcone, coupled with the CommScope CBRS portfolio, function as an optimal pairing in solving business challenges.

“Snowcone is a perfect fit for powering up 100s of edge network devices with private LTE mobility,” said Upendra Pingle, Vice President, Distributed Coverage and Capacity Solutions, CommScope.

“As an AWS technology partner, CommScope’s CBRS solutions, including our Spectrum Access System and Radio Access Network offerings, will easily integrate for quick, frictionless deployments that give users total control of their data.

“The Snowcone device enables us to continue driving further innovations in our CBRS solutions all while transforming the barrier to entry for private LTE networks.”

How to implement least privilege in the cloud

According to a recent survey of 241 industry experts conducted by the Cloud Security Alliance (CSA), misconfiguration of cloud resources is a leading cause of data breaches.

least privilege cloud

The primary reason for this risk? Managing identities and their privileges in the cloud is extremely challenging because the scale is so large. It extends beyond just human user identities to devices, applications and services. Due to this complexity, many organizations get it wrong.

The problem becomes increasingly acute over time, as organizations expand their cloud footprint without establishing the capability to effectively assign and manage permissions. As a result, users and applications tend to accumulate permissions that far exceed technical and business requirements, creating a large permissions gap.

Consider the example of the U.S. Defense Department, which exposed access to military databases containing at least 1.8 billion internet posts scraped from social media, news sites, forums and other publicly available websites by CENTCOM and PACOM, two Pentagon unified combatant commands charged with US military operations across the Middle East, Asia, and the South Pacific. The department had configured three Amazon Web Services S3 cloud storage buckets to allow any authenticated AWS user to browse and download the contents; AWS accounts of this type can be acquired with a free sign-up.
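Exposures of this kind are straightforward to spot programmatically. The sketch below uses boto3 to flag buckets whose ACLs grant access to the AllUsers or AuthenticatedUsers groups, which is the type of grant at issue here; it checks ACLs only, not bucket policies.

import boto3

s3 = boto3.client("s3")

# ACL group URIs that expose a bucket to anyone on the internet or to any
# authenticated AWS account holder, respectively.
RISKY_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        grantee = grant["Grantee"]
        if grantee.get("Type") == "Group" and grantee.get("URI") in RISKY_GROUPS:
            print(f"{bucket['Name']}: {grant['Permission']} granted to {grantee['URI']}")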

Focus on permissions

To mitigate risks associated with the abuse of identities in the cloud, organizations are trying to enforce the principle of least privilege. Ideally, every user or application should be limited to the exact permissions required.

In theory, this process should be straightforward. The first step is to understand which permissions a given user or application has been assigned. Next, an inventory of those permissions actually being used should be conducted. Comparing the two reveals the permission gap, namely which permissions should be retained and which should be modified or removed.

This can be accomplished in several ways. The permissions deemed excessive can be removed or monitored and alerted on. By continually re-examining the environment and removing unused permissions, an organization can achieve least privilege in the cloud over time.
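Conceptually, the comparison is just a set difference. A minimal sketch, assuming the granted and used permission lists have already been collected (for example from IAM policies and CloudTrail activity; the values here are illustrative):

# Permissions attached to a role (from its IAM policies) and permissions it
# has actually exercised (e.g., derived from CloudTrail events).
granted = {"s3:GetObject", "s3:PutObject", "dynamodb:Query", "rds:DescribeDBInstances"}
used = {"s3:GetObject", "dynamodb:Query"}

# The permission gap: candidates for removal, or at least for monitoring.
permission_gap = granted - used
print(sorted(permission_gap))  # ['rds:DescribeDBInstances', 's3:PutObject']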

However, the effort required to determine the precise permissions necessary for each application in a complex cloud environment can be both labor intensive and prohibitively expensive.

Understand native IAM controls

Let’s look at AWS, since it is the most popular cloud platform and offers one of the most granular Identity and Access Management (IAM) systems available. AWS IAM is a powerful tool that allows administrators to securely configure access to AWS cloud resources. With over 2,500 permissions (and counting), IAM gives users fine-grained control over which actions can be performed on a given resource in AWS.

Not surprisingly, this degree of control introduces an equal (some might say greater) level of complexity for developers and DevOps teams.

In AWS, roles are used as machine identities. Granting an application specific permissions requires attaching access policies to the relevant role. These can be managed policies, created by the cloud service provider (CSP), or inline policies, created by the AWS customer.
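For illustration, here is how both kinds of policy are attached to a role with boto3; the role name, policy ARN and bucket are hypothetical.

import json

import boto3

iam = boto3.client("iam")
ROLE_NAME = "order-processing-app"  # hypothetical application role

# Attach a managed policy created by the cloud service provider.
iam.attach_role_policy(
    RoleName=ROLE_NAME,
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess",
)

# Add a customer-written inline policy scoped to a single bucket.
inline_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-app-bucket/*",
        }
    ],
}
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="s3-read-example-app-bucket",
    PolicyDocument=json.dumps(inline_policy),
)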

Rein in roles

Roles, which can be assigned more than one access policy or serve more than one application, make the journey to least-privilege more challenging.

Here are several scenarios that illustrate this point.

1. Single application – single role: where an application uses a role with different managed and inline policies, granting privileges to access Amazon ElastiCache, RDS, DynamoDB, and S3 services. How do we know which permissions are actually being used? And once we do, how do we right-size the role? Do we replace managed policies with inline ones? Do we edit existing inline policies? Do we create new policies of our own?

2. Two applications – single role: where two different applications share the same role. Let’s assume that this role has access permissions to Amazon ElastiCache, RDS, DynamoDB and S3 services. But while the first application is using RDS and ElastiCache services, the second is using ElastiCache, DynamoDB, and S3. Therefore, to achieve least-privilege the correct action would be role splitting, and not simply role right-sizing. In this case, role-splitting would be followed by role right-sizing, as a second step.

3. Role chaining occurs when an application uses a role that does not have any sensitive permissions, but this role has the permission to assume a different, more privileged role. If the more privileged role has permission to access a variety of services like Amazon ElastiCache, RDS, DynamoDB, and S3, how do we know which services are actually being used by the original application? And how do we restrict the application’s permissions without disrupting other applications that might also be using the second, more privileged role?
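Role chaining itself is a single STS call from the application’s point of view, which is part of why the resulting permissions are hard to reason about. A minimal sketch follows; the account ID, role name and DynamoDB call are hypothetical.

import boto3

sts = boto3.client("sts")

# The application's own low-privilege role credentials are already in the
# environment; here it assumes the second, more privileged role.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/privileged-data-role",
    RoleSessionName="app-data-access",
)
creds = resp["Credentials"]

# Calls made with the chained credentials are attributed to the privileged
# role; CloudTrail records the AssumeRole event, which is what lets you trace
# the chain back to the original application role.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(dynamodb.list_tables()["TableNames"])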

One native AWS tool called Access Advisor allows administrators to investigate the list of services accessed by a given role and verify how it is being used. However, relying solely on Access Advisor does not connect the dots between access permissions and individual resources required to address many policy decisions. To do that, it’s necessary to dig deep into the CloudTrail logs, as well as the compute management infrastructure.
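Access Advisor data is also available programmatically through the IAM API. The sketch below generates a last-accessed report for a role and prints the services the role is entitled to use but has never touched; the role ARN is hypothetical.

import time

import boto3

iam = boto3.client("iam")
ROLE_ARN = "arn:aws:iam::123456789012:role/order-processing-app"  # hypothetical

# Start an Access Advisor job for the role, then poll until it completes.
job_id = iam.generate_service_last_accessed_details(Arn=ROLE_ARN)["JobId"]
while True:
    report = iam.get_service_last_accessed_details(JobId=job_id)
    if report["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

# Services the role can reach but has never authenticated to are candidates
# for removal when right-sizing the role.
for svc in report["ServicesLastAccessed"]:
    if "LastAuthenticated" not in svc:
        print(f"Unused service access: {svc['ServiceNamespace']}")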

Least privilege in the cloud

Finally, keep in mind that we have only touched on native AWS IAM access controls. There are several additional issues to be considered when mapping access permissions to resources, including indirect access (via secrets stored in Key Management Systems and Secret Stores), or application-level access. That is a discussion for another day.

As we’ve seen, enforcing least privilege in the cloud to minimize the access risks that lead to data breaches or service interruptions can be infeasible to do manually for many organizations. New technologies are emerging to bridge this governance gap by using software to automate the monitoring, assessment and right-sizing of access permissions across all identities – users, devices, applications, etc. – in order to eliminate risk.

Amazon AppFlow automates bidirectional data flows between AWS and SaaS apps

Amazon AppFlow is a fully managed service that provides an easy, secure way for customers to create and automate bidirectional data flows between AWS and SaaS applications without writing custom integration code.

Amazon AppFlow

Amazon AppFlow also works with AWS PrivateLink to route data flows through the AWS network instead of over the public Internet to provide even stronger data privacy and security.

There are no upfront charges or fees to use Amazon AppFlow, and customers only pay for the number of flows they run and the volume of data processed.

Millions of customers run applications, data lakes, large-scale analytics, machine learning, and IoT workloads on AWS. These customers often also have data stored in dozens of SaaS applications, resulting in silos that are disconnected from data stored in AWS.

Organizations want to be able to combine their data from all of these sources, but that requires customers to spend days writing code to build custom connectors and data transformations to convert disparate data types and formats across different SaaS applications.

Customers with multiple SaaS applications end up with a sprawl of connectors and complex code that is time-consuming and expensive to maintain. Further, custom connectors are often difficult to scale for large volumes of data or near real-time transfer, causing delays between when data is available in SaaS and when other systems access the data.

In large enterprises, business users wait months for skilled developers to build custom connectors. In firms with limited in-house developer skills, users resort to manually uploading and downloading data between systems, which is tedious, error-prone and risks data leakage.

Amazon AppFlow solves these problems, and allows customers with diverse technical skills, including CRM administrators and BI specialists, to easily configure private, bidirectional data flows between AWS services and SaaS applications without writing code or performing data transformation.

Customers can get started using Amazon AppFlow’s simple interface to build and execute data flows between sources in minutes, and Amazon AppFlow securely orchestrates and executes the data transfer.

With just a few clicks in the Amazon AppFlow console, customers can configure multiple types of triggers for their data flows, including one-time on-demand transfers, routine data syncs scheduled at pre-determined times, or event-driven transfers triggered by business events (e.g., launching a campaign, converting a lead, closing an opportunity, or opening a case).

For example, customers can backup millions of contacts and support cases from Salesforce to Amazon Simple Storage Service (Amazon S3), add sales opportunities from Salesforce to forecasts in Amazon Redshift, and transfer marketing leads from Amazon S3 to Salesforce after using Amazon SageMaker to add lead scores.
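For teams that prefer to script it, flows can also be created through the AWS SDK. The rough sketch below creates an on-demand Salesforce-to-S3 flow with boto3; it assumes a Salesforce connector profile named my-salesforce-profile already exists, and the field list and task configuration are illustrative and may need adjusting to AppFlow’s validation rules for your connector.

import boto3

appflow = boto3.client("appflow")

fields = ["Id", "Name", "Email"]  # illustrative Salesforce Contact fields

appflow.create_flow(
    flowName="salesforce-contacts-to-s3",
    triggerConfig={"triggerType": "OnDemand"},
    sourceFlowConfig={
        "connectorType": "Salesforce",
        "connectorProfileName": "my-salesforce-profile",  # assumed to exist
        "sourceConnectorProperties": {"Salesforce": {"object": "Contact"}},
    },
    destinationFlowConfigList=[
        {
            "connectorType": "S3",
            "destinationConnectorProperties": {
                "S3": {"bucketName": "example-contact-backups"}
            },
        }
    ],
    tasks=[
        # Project the fields to transfer, then map them through unchanged.
        {
            "sourceFields": fields,
            "connectorOperator": {"Salesforce": "PROJECTION"},
            "taskType": "Filter",
        },
        {"sourceFields": fields, "taskType": "Map_all", "taskProperties": {}},
    ],
)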

Customers can also pull logs and metric data from monitoring tools like Datadog or Dynatrace for deep analytics in Amazon Redshift, or send customer engagement data from Slack, Marketo, Zendesk, Amplitude, or Singular to Amazon S3 for sentiment analysis.

Customers can transform and process the data by combining fields (to calculate new values), filtering records (to reduce noise), masking sensitive data (to ensure privacy), and validating field values (to cleanse the data).

Amazon AppFlow automatically encrypts data at rest and in motion using AWS or customer-managed encryption keys, and enables users to restrict data from flowing over the public Internet for applications that are integrated with AWS PrivateLink, reducing exposure to security threats.

“Our customers tell us that they love having the ability to store, process, and analyze their data in AWS. They also use a variety of third party SaaS applications, and they tell us that it can be difficult to manage the flow of data between AWS and these applications,” said Kurt Kufeld, Vice President, AWS.

“Amazon AppFlow provides an intuitive and easy way for customers to combine data from AWS and SaaS applications without moving it across the public Internet. With Amazon AppFlow, our customers bring together and manage petabytes, even exabytes, of data spread across all of their applications – all without having to develop custom connectors or manage underlying API and network connectivity.”

Amazon AppFlow availability

Amazon AppFlow is available today in US East (Northern Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Canada (Central), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Mumbai), Europe (Paris), Europe (Ireland), Europe (Frankfurt), Europe (London), and South America (São Paulo), with more regions to come.

Customer comments

“As the world’s largest employee benefits provider, Unum leverages a tremendous amount of structured and unstructured data to ensure a great customer experience,” said Balaji Apparsamy, VP Data and Analytics, Unum Group. “Amazon AppFlow helps our data analytics team to simplify configuration allowing us to accelerate data-driven integrations and build data science applications at a much faster pace, which ultimately helps us enhance our customer satisfaction.”

“With Amazon AppFlow integrating directly with Salesforce Private Connect, joint customers will be able to establish a secure, private connection for passing data back and forth between the Salesforce and AWS platforms,” said Sarah Franklin, EVP & GM Platform, Trailhead & Developers, Salesforce. “And because these connections can be set up by Salesforce admins in just a few clicks, companies can cut down on costly and timely engineering resources, and begin doing more with their data faster than ever before.”

Trend Micro is a global cybersecurity solutions provider that provides layered security for data centers, cloud environments, networks, and endpoints. “The integration using Amazon AppFlow benefits our customers by reducing friction when distributing data from their Trend Micro Cloud One account to AWS services,” said Sanjay Mehta, SVP, Business Development & Alliances, Trend Micro. “This no-code capability enables continuous audit automation and gives our customers’ security and development teams a seamless and secure way to deliver data related to security agents.”

Multi-cloud key management and BYOK

Cloud providers such as Google Cloud Platform, AWS, and Microsoft Azure work hard to be the service provider of choice for enterprise customers. They often push the envelope with specialized features and capabilities unique to each platform. These features can add real value for certain industries and applications and help differentiate the platforms from each other.

BYOK

At the same time, the reliance on unique services across the various public clouds creates a barrier that inhibits enterprise customers from easily switching from one cloud provider to another or managing applications efficiently across a multi-cloud environment.

In addition, all the public cloud vendors have their own solution for encryption key management, which can be extended to specific applications for enhanced data protection. While this establishes a high degree of security, organizations lose control over the keys and give up the ability to easily migrate to different cloud platforms.

Many organizations start off with the intention of sticking to a preferred cloud provider. But over time, they may need to host certain applications or access certain services that are only available on other clouds. When that happens, they invariably migrate to a multi-cloud environment. For smaller organizations, it may be possible to stay with a single provider, but as organizations grow, they have to consider going multi-cloud. And from a redundancy standpoint, having the ability to move from one cloud to another in case something happens is very attractive to larger organizations. Additionally, organizations may have an audit requirement involving backup or redundancy capabilities and simply can’t rely on a single vendor as a sole source.

Furthermore, if the cloud provider directly manages an organization’s cryptographic keys, local employees could access the organization’s sensitive data if proper oversight and controls are not in place. Also, if the cloud provider is issued a legal order, they are left with no choice but to comply and hand over the organization’s keys.

Use your own keys

To address these challenges, cloud providers have introduced support for Bring Your Own Key (BYOK), which allows organizations to encrypt data inside cloud services with their own keys while continuing to leverage the cloud provider’s native encryption services to protect their data.

Even with BYOK, keys still exist in the cloud providers’ key management service. But because keys are now generated, escrowed, rotated, and retired in an on-premises hardware security module (HSM), BYOK helps organizations to more fully address compliance and reporting requirements. Another benefit is that companies can ensure cryptographic keys are generated using a sufficient source of entropy and are protected from disclosure.

While BYOK offers increased control, it also comes with additional key management responsibilities that are magnified in multi-cloud environments. Every cloud provider has its own set of APIs and its own cryptographic methods for transporting keys. With AWS, you import keys through the AWS Management Console, the command-line interface, or APIs, with key material transported over TLS. Microsoft has Azure Storage Service Encryption for data at rest along with the Azure Storage Client Library, and keys must be stored in Azure Key Vault. Google Cloud Platform, meanwhile, has its own set of tools for managing keys for services such as Google Cloud Storage or Google Compute Engine.
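As an illustration of the AWS path, the sketch below walks through importing externally generated key material into AWS KMS with boto3. The wrapping step is represented by a placeholder, since it happens inside the on-premises HSM and is vendor-specific.

import boto3

kms = boto3.client("kms")

# 1. Create a KMS key with no key material (Origin=EXTERNAL means BYOK).
key_id = kms.create_key(Origin="EXTERNAL", Description="BYOK example key")[
    "KeyMetadata"
]["KeyId"]

# 2. Fetch the wrapping public key and import token from KMS.
params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)

# 3. Wrap the locally generated key material with params["PublicKey"] inside
#    the on-premises HSM; wrap_with_onprem_hsm is a hypothetical helper
#    standing in for that vendor-specific step.
encrypted_key_material = wrap_with_onprem_hsm(params["PublicKey"])

# 4. Import the wrapped key material into KMS before the import token expires.
kms.import_key_material(
    KeyId=key_id,
    ImportToken=params["ImportToken"],
    EncryptedKeyMaterial=encrypted_key_material,
    ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE",
)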

Fundamentally, the processes, procedures and methods for managing keys are completely different across clouds, and not just from an API standpoint, but from architecture and process standpoints with each requiring different key management techniques. Needless to say, all this complexity and variability is the enemy of efficient operations and any missteps can put critical data at risk.

The irony is that, at the end of the day, you’re trying to accomplish the same thing everywhere: encrypting application data in the cloud using keys. That’s also the good news. Because the goal of key management is singular, many organizations are turning to centralized key management to manage the full lifecycle of their cloud keys.

BYOK scenario

In the BYOK scenario, centralizing key management can offer significant advantages by allowing organizations to consolidate policies and procedures, develop consistent, repeatable, and well-documented practices, and – most importantly – reduce the risks of exposing keys.

As mentioned above, even with BYOK, organizations still have to leave a copy of their cryptographic keys with the cloud provider. To address this problem, cloud providers are starting to develop interfaces to allow their customers to fully utilize external key management systems. Not only will this give organizations complete control of their keys, but it points toward centralization as the accepted best practice for managing encryption across multiple cloud environments.

Based on the broad trend toward multi-cloud and the challenge of key management in a multi-cloud world, it’s safe to assume that other cloud providers will be adding improved support for external key management. This will make it increasingly easy to simplify key management functions across multiple clouds while allowing you to retain full control over your data and encryption keys.

Ping Identity PingID multi-factor authentication now available in AWS Marketplace

Ping Identity, the Intelligent Identity solution for the enterprise, announced the availability of PingID multi-factor authentication (MFA) in AWS Marketplace. Customers can now quickly procure and deploy PingID to secure work from home while adding an additional layer of security to their AWS infrastructure.

Ping’s Intelligent Identity™ platform provides enterprises a digital identity solution for securely accessing services, applications, and APIs from virtually any device or location.

The PingID MFA service makes it easy for enterprises to offer strongly authenticated access to applications running nearly anywhere, in the cloud, on-premises or across hybrid IT environments.

Ping Identity is an Advanced Technology Partner in the AWS Partner Network (APN) and also achieved AWS Security Competency status. PingID complements existing AWS services to allow customers to provide a secure and seamless experience across their cloud and on-premises environments.

“Ping Identity is committed to working with AWS to address the security needs of today’s enterprises as they continue their digital transformation initiatives and migrate to the cloud,” said Loren Russon, vice president, product management and technology alliances, Ping Identity.

“Adding PingID to AWS Marketplace is another important step in helping our global customers quickly increase security at scale and enable secure work from home solutions.”