McAfee MVISION CNAPP enhances cloud-native security by integrating with AWS

McAfee announced the MVISION Cloud Native Application Protection Platform (CNAPP) with several native Amazon Web Services (AWS) integrations to help customers more easily secure their applications and data in their Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) environments.

Architected to support multiple AWS services, MVISION CNAPP helps customers continuously identify and fix misconfigurations and software vulnerabilities in their AWS environment and securely accelerate their deployment of cloud-native applications.

Announced last month, MVISION CNAPP is a new McAfee security service that combines Cloud Security Posture Management (CSPM), Cloud Workload Protection Platform (CWPP), and application and data security into one solution.

The unified solution provides security teams with deep insight into AWS service configurations, industry benchmarks to better assess their data and application security risk, and integrated workload protection tools to improve security across the entire application lifecycle.

CNAPP integrates with several AWS deployment services, such as AWS Systems Manager and AWS PrivateLink, to make deployment easier and more secure, as well as with security services like AWS Security Hub, which it enriches with broader workload and data context for enhanced security.

“AWS Security Hub is a great example of a security service built specifically for AWS customers,” said Anand Ramanathan, vice president of product management, McAfee.

“We’ve collaborated with AWS to add hybrid security use cases and broader workload and data context to enhance the value of this service, as well as to leverage AWS-native deployment services, allowing customers to simply add our CNAPP capabilities to deployment pipelines already in use, thus seamlessly enhancing the security of their cloud-native applications.”

MVISION CNAPP is available in AWS Marketplace, giving customers a streamlined method for purchasing the new service along with consolidated billing for consumption.

What’s more, MVISION CNAPP has purpose-built security audit policies for AWS container services Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), and AWS Fargate.

“In today’s digital enterprise, security is a critical priority across the organization,” said Dan Plastina, Vice President, Security Services, Amazon Web Services, Inc. “We are delighted to be working with McAfee to facilitate collaboration across developer and security teams so that customers can more effectively secure their workloads in the cloud.”

“EA’s business depends on the public cloud, and it’s my role to manage the security of that environment,” said Bob Fish, Enterprise Security Architect at Electronic Arts.

“MVISION CNAPP integrates with AWS deployment services such as AWS Systems Manager and AWS PrivateLink and also integrates with AWS security services like AWS Security Hub, enhancing AWS native security capabilities.

“We prefer a single unified security platform over implementing separate point products for each security capability required. The unified approach of MVISION CNAPP allows us to use fewer people to manage security risk across all our AWS resources.”

AWS Network Firewall: Network protection across all AWS workloads

Amazon Web Services announced the general availability of AWS Network Firewall, a new managed security service that makes it easier for customers to enable network protections across all of their AWS workloads. Customers can enable AWS Network Firewall in their desired Amazon Virtual Private Cloud (VPC) environments with just a few clicks in the AWS Console, and the service automatically scales with network traffic to provide high availability protections without the need to set up or maintain the underlying infrastructure.
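
The console flow described above also has an API equivalent. Below is a minimal boto3 sketch that creates a firewall endpoint in a VPC; it assumes a firewall policy already exists, and the names, ARN, and IDs are placeholders.

```python
import boto3

# All names, ARNs, and IDs below are placeholders.
nfw = boto3.client("network-firewall", region_name="us-east-1")

response = nfw.create_firewall(
    FirewallName="example-vpc-firewall",
    FirewallPolicyArn="arn:aws:network-firewall:us-east-1:123456789012:firewall-policy/example-policy",
    VpcId="vpc-0123456789abcdef0",
    # One firewall endpoint is created in each listed subnet.
    SubnetMappings=[{"SubnetId": "subnet-0123456789abcdef0"}],
    Description="Managed network protection for the example VPC",
)
print(response["Firewall"]["FirewallArn"])
```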


AWS Glue DataBrew: Enabling customers to clean and normalize data without writing code

Amazon Web Services announced the general availability of AWS Glue DataBrew, a new visual data preparation tool that enables customers to clean and normalize data without writing code.

Since 2016, data engineers have used AWS Glue to create, run, and monitor extract, transform, and load (ETL) jobs. AWS Glue provides both code-based and visual interfaces, and has dramatically simplified extracting, orchestrating, and loading data in the cloud for customers.

Data analysts and data scientists have wanted an easier way to clean and transform this data, and that’s what DataBrew delivers, with a service that allows data exploration and experimentation directly from AWS data lakes, data warehouses, and databases without writing code.

AWS Glue DataBrew offers customers over 250 pre-built transformations to automate data preparation tasks (e.g. filtering anomalies, standardizing formats, and correcting invalid values) that would otherwise require days or weeks of writing hand-coded transformations.

Once the data is prepared, customers can immediately start using it with AWS and third-party analytics and machine learning services to query the data and train machine learning models. There are no upfront commitments or costs to use AWS Glue DataBrew, and customers only pay for creating and running transformations on datasets.

Preparing data for analytics and machine learning involves several necessary and time-consuming tasks, including data extraction, cleaning, normalization, loading, and the orchestration of ETL workflows at scale.

For extracting, orchestrating, and loading data at scale, data engineers and ETL developers skilled in SQL or programming languages like Python or Scala can use AWS Glue.

ETL developers often prefer the visual interfaces common in modern ETL tools over writing SQL, Python, or Scala, so AWS recently introduced AWS Glue Studio, a new visual interface to help author, run, and monitor ETL jobs without having to write any code.

Once the data has been reliably moved, the underlying data still needs to be cleaned and normalized by data analysts and data scientists who operate in the lines of business and understand the context of the data.

To clean and normalize the data, data analysts and data scientists have to either work with small batches of the data in Excel or Jupyter Notebooks, which cannot accommodate large data sets, or rely on scarce data engineers and ETL developers to write custom code to perform cleaning and normalization transformations.

In an effort to spot anomalies in the data, highly skilled data engineers and ETL developers spend days or weeks writing custom workflows to pull data from different sources, then pivot, transpose, and slice the data multiple times, before they can iterate with data analysts or data scientists to identify and fix data quality issues.

After they have developed these transformations, data engineers and ETL developers still need to schedule the custom workflows to run on an ongoing basis, so new incoming data can automatically be cleaned and normalized.

Each time a data analyst or data scientist wants to change or add a transformation, the data engineers and ETL developers need to extract, load, clean, normalize, and orchestrate the data preparation tasks over again.

This iterative process can take several weeks, or even months, to complete; as a result, customers spend as much as 80% of their time cleaning and normalizing data instead of actually analyzing the data and extracting value from it.

AWS Glue DataBrew is a visual data preparation tool for AWS Glue that allows data analysts and data scientists to clean and transform data with an interactive, point-and-click visual interface, without writing any code.

With AWS Glue DataBrew, end users can easily access and visually explore any amount of data across their organization directly from their Amazon Simple Storage Service (S3) data lake, Amazon Redshift data warehouse, and Amazon Aurora and Amazon Relational Database Service (RDS) databases.

Customers can choose from over 250 built-in functions to combine, pivot, and transpose the data without writing code. AWS Glue DataBrew recommends data cleaning and normalization steps like filtering anomalies, normalizing data to standard date and time values, generating aggregates for analyses, and correcting invalid, misclassified, or duplicative data.

For complex tasks like converting words to a common base or root word (e.g. converting “yearly” and “yearlong” to “year”), AWS Glue DataBrew also provides transformations that use advanced machine learning techniques like Natural Language Processing (NLP).

Users can then save these cleaning and normalization steps into a workflow (called a recipe) and apply them automatically to future incoming data. If changes need to be made to the workflow, data analysts and data scientists simply update the cleaning and normalization steps in the recipe, and they are automatically applied to new data as it arrives.
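
For readers who prefer the API to the visual interface, the recipe workflow can also be driven through boto3. The sketch below registers an S3 dataset, defines and publishes a two-step recipe, and runs it as a job; the bucket, role ARN, and column names are placeholders, and the operation names are illustrative examples that should be checked against the DataBrew recipe action reference.

```python
import boto3

databrew = boto3.client("databrew", region_name="us-east-1")

# Register a raw dataset in S3 (bucket and key are placeholders).
databrew.create_dataset(
    Name="clickstream-raw",
    Input={"S3InputDefinition": {"Bucket": "example-bucket", "Key": "raw/clickstream.csv"}},
)

# A two-step cleaning recipe; operation names here are illustrative.
databrew.create_recipe(
    Name="clickstream-cleanup",
    Steps=[
        {"Action": {"Operation": "LOWER_CASE", "Parameters": {"sourceColumn": "country"}}},
        {"Action": {"Operation": "REMOVE_MISSING", "Parameters": {"sourceColumn": "user_id"}}},
    ],
)
databrew.publish_recipe(Name="clickstream-cleanup")

# A recipe job applies the published steps and writes prepared output
# back to S3, so new incoming data is cleaned automatically.
databrew.create_recipe_job(
    Name="clickstream-cleanup-job",
    DatasetName="clickstream-raw",
    RecipeReference={"Name": "clickstream-cleanup"},
    RoleArn="arn:aws:iam::123456789012:role/DataBrewServiceRole",  # placeholder
    Outputs=[{"Location": {"Bucket": "example-bucket", "Key": "prepared/"}}],
)
databrew.start_job_run(Name="clickstream-cleanup-job")
```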

AWS Glue DataBrew publishes the prepared data to Amazon S3, which makes it easy for customers to immediately use it in analytics and machine learning. AWS Glue DataBrew is serverless and fully managed, so customers never need to configure, provision, or manage any compute resources.

“AWS customers are using data for analytics and machine learning at an unprecedented pace. However, these customers regularly tell us that their teams spend too much time on the undifferentiated, repetitive, and mundane tasks associated with data preparation,” said Raju Gulabani, VP of Database and Analytics, AWS.

“Customers love the scalability and flexibility of code-based data preparation services like AWS Glue, but they could also benefit from allowing business users, data analysts, and data scientists to visually explore and experiment with data independently, without writing code.

“AWS Glue DataBrew features an easy-to-use visual interface that helps data analysts and data scientists of all technical levels understand, combine, clean, and transform data.”

AWS Glue DataBrew is generally available in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Tokyo), with availability in additional regions coming soon.

Tokyo-based NTT DOCOMO is the largest mobile service provider in Japan, serving more than 80 million customers. “Our analysts profile and query various kinds of structured and unstructured data in order to better understand usage patterns,” said Takashi Ito, General Manager of Marketing Platform Planning Department, NTT DOCOMO.

“AWS Glue DataBrew provides a visual interface that enables both our technical and non-technical users to analyze data quickly and easily. Its advanced data profiling capability helps us better understand our data and monitor the data quality. AWS Glue DataBrew and other AWS analytics services have allowed us to streamline our workflow and increase productivity.”

bp is one of the world’s largest integrated energy companies. “A data lake is a critical part of our analytics strategy. One of the challenges we face is not being able to easily explore data before ingestion into our data lake,” said John Maio, Director, Data & Analytics Platforms Architecture, bp.

“AWS Glue DataBrew has sophisticated data profiling functionality and a rich set of built-in transformations. This enables our data engineers to easily explore new datasets in a visual interface and make modifications in order to optimize ingestion and allow analysts to shape the data for their analytics solutions.

“We see AWS Glue DataBrew as a way to help us better manage our data platform and improve efficiencies in our data pipelines.”

INVISTA, a subsidiary of Koch Industries, is one of the world’s largest integrated producers of chemical intermediates, polymers, and fibers.

“Data is critical to optimizing our manufacturing processes. One of the challenges we face is ensuring we have a clean data lake that can serve as the source of truth for our analytics and machine learning applications,” said Tanner Gonzalez, Analytics and Cloud leader, INVISTA.

“The data ingested into our data lake often contains duplicate values, incorrect formatting, and other imperfections that make it difficult to use in its raw form. AWS Glue DataBrew will allow our data analysts to visually inspect large data sets, clean and enrich data, and perform advanced transformations.

“AWS Glue DataBrew will empower our analysts and data scientists to perform advanced data engineering activities, giving them the freedom to explore their data and decreasing the time to derive new insights.”

Checkmarx brings software security solutions to AWS Marketplace, earns AWS DevOps Competency status

Checkmarx announced major milestones in its relationship with Amazon Web Services (AWS), bringing its software security solutions to AWS Marketplace and earning AWS DevOps Competency status.

With these moves, Checkmarx is delivering greater simplicity, flexibility, and confidence to customers looking to deploy application security testing (AST) solutions into their AWS CI/CD pipelines.

Checkmarx provides automated solutions that simplify and speed up the process of security testing in fast-paced DevOps environments. Checkmarx SAST, IAST, SCA, and Codebashing integrate seamlessly with developer workflows and tools to quickly find and remediate vulnerabilities in both custom and open source code before software is released into production.

Of note, Checkmarx’s availability in AWS Marketplace follows the company’s recent string of partnership activity with premier software development platforms including GitHub and GitLab.

The AWS DevOps Competency recognizes that Checkmarx provides a proven technology with deep expertise in helping organizations implement application security within continuous integration and delivery practices on AWS.

With this latest certification, Checkmarx becomes the only AST solutions provider to possess both the AWS Security and DevOps Competencies, underscoring its commitment to helping organizations move their DevOps initiatives to the cloud.

“Checkmarx empowers cloud-first organizations to enhance the security of the software they release while providing a seamless experience for developers,” said Robert Nilsson, VP of Product Management, Checkmarx.

“Bringing our solutions to AWS Marketplace, as well as achieving both the AWS Security and DevOps Competencies, demonstrates our dedication to the AWS community and our customers in helping them strategically and securely navigate their cloud and digital transformation journeys.”

AWS enables scalable, flexible, and cost-effective solutions for banking and payments, capital markets, and insurance organizations, from startups to global enterprises.

To support the seamless integration and deployment of these solutions, AWS established the AWS Competency Program to help customers identify AWS Consulting and Technology Partners with deep industry experience and expertise.

AWS launches Amazon EC2 P4d instances, boosting performance for ML training and HPC

Amazon Web Services announced the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P4d instances, the next generation of GPU-powered instances delivering 3x faster performance, up to 60% lower cost, and 2.5x more GPU memory for machine learning training and high-performance computing (HPC) workloads when compared to previous generation P3 instances.

P4d instances feature eight NVIDIA A100 Tensor Core GPUs and 400 Gbps of network bandwidth (16x more than P3 instances). Using AWS’s Elastic Fabric Adapter (EFA) and NVIDIA GPUDirect RDMA (remote direct memory access), customers can deploy P4d instances with EC2 UltraClusters capability.

With EC2 UltraClusters, customers can scale P4d instances to over 4,000 A100 GPUs (2x as many as any other cloud provider) by making use of AWS-designed non-blocking petabit-scale networking infrastructure integrated with Amazon FSx for Lustre high performance storage, offering on-demand access to supercomputing-class performance to accelerate machine learning training and HPC.
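
As a rough illustration of what this looks like from the API side, the boto3 sketch below launches a pair of P4d instances into a cluster placement group with an EFA network interface attached; the AMI, subnet, and security group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement keeps the instances physically close for
# low-latency EFA / GPUDirect RDMA traffic between nodes.
ec2.create_placement_group(GroupName="p4d-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: e.g. a Deep Learning AMI
    InstanceType="p4d.24xlarge",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "p4d-cluster"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",  # attach an Elastic Fabric Adapter
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
    }],
)
```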

Data scientists and engineers are continuing to push the boundaries of machine learning by creating larger and more-complex models that provide higher prediction accuracy for a broad range of use cases, including perception model training for autonomous vehicles, natural language processing, image classification, object detection, and predictive analytics.

Training these complex models against large volumes of data is a compute-, network-, and storage-intensive task that often takes days or weeks. Customers not only want to cut down on the time-to-train their models, but they also want to lower their overall spend on training.

Collectively, long training times and high costs limit how frequently customers can train their models, which translates into a slower pace of development and innovation for machine learning.

The increased performance of P4d instances speeds up the time to train machine learning models by up to 3x (reducing training time from days to hours) and the additional GPU memory helps customers train larger, more complex models.

As data becomes more abundant, customers are training models with millions and sometimes billions of parameters, like those used for natural language processing for document summarization and question answering, object detection and classification for autonomous vehicles, image classification for large-scale content moderation, recommendation engines for e-commerce websites, and ranking algorithms for intelligent search engines—all of which require increasing network throughput and GPU memory.

P4d instances feature 8 NVIDIA A100 Tensor Core GPUs capable of up to 2.5 petaflops of mixed-precision performance and 320 GB of high bandwidth GPU memory in one EC2 instance.

P4d instances are the first in the industry to offer 400 Gbps network bandwidth with Elastic Fabric Adapter (EFA) and NVIDIA GPUDirect RDMA network interfaces to enable direct communication between GPUs across servers for lower latency and higher scaling efficiency, helping to unblock scaling bottlenecks across multi-node distributed workloads.

Each P4d instance also offers 96 Intel Xeon Scalable (Cascade Lake) vCPUs, 1.1 TB of system memory, and 8 TB of local NVMe storage to reduce single node training times.

By more than doubling the performance of previous-generation P3 instances, P4d instances can lower the cost to train machine learning models by up to 60%, providing customers greater efficiency over expensive and inflexible on-premises systems.

HPC customers will also benefit from P4d’s increased processing performance and GPU memory for demanding workloads like seismic analysis, drug discovery, DNA sequencing, materials science, and financial and insurance risk modeling.

P4d instances are also built on the AWS Nitro System, AWS-designed hardware and software that has enabled AWS to deliver an ever-broadening selection of EC2 instances and configurations to customers, while offering performance that is indistinguishable from bare metal, providing fast storage and networking, and ensuring more secure multi-tenancy.

P4d instances offload networking functions to dedicated Nitro Cards that accelerate data transfer between multiple P4d instances. Nitro Cards also enable EFA and GPUDirect, which allows for direct cross-server communication between GPUs, facilitating lower latency and better scaling performance across EC2 UltraClusters of P4d instances.

These Nitro-powered capabilities make it possible for customers to launch P4d in EC2 UltraClusters with on-demand and scalable access to over 4,000 GPUs for supercomputer-class performance.

“The pace at which our customers have used AWS services to build, train, and deploy machine learning applications has been extraordinary. At the same time, we have heard from those customers that they want an even lower cost way to train their massive machine learning models,” said Dave Brown, Vice President, EC2, AWS.

“Now, with EC2 UltraClusters of P4d instances powered by NVIDIA’s latest A100 GPUs and petabit-scale networking, we’re making supercomputing-class performance available to virtually everyone, while reducing the time to train machine learning models by 3x, and lowering the cost to train by up to 60% compared to previous generation instances.”

Customers can run containerized applications on P4d instances using AWS Deep Learning Containers, with libraries for Amazon Elastic Kubernetes Service (Amazon EKS) or Amazon Elastic Container Service (Amazon ECS).

For a more fully managed experience, customers can use P4d instances via Amazon SageMaker, providing developers and data scientists with the ability to build, train, and deploy machine learning models quickly.

HPC customers can leverage AWS Batch and AWS ParallelCluster with P4d instances to help orchestrate jobs and clusters efficiently. P4d instances support all major machine learning frameworks, including TensorFlow, PyTorch, and Apache MXNet, giving customers the flexibility to choose the framework that works best for their applications.
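
For the managed SageMaker route, a training job on P4d capacity can be expressed in a few lines with the SageMaker Python SDK. A minimal sketch, assuming a local train.py training script, an existing execution role, and an S3 data path, all placeholders:

```python
from sagemaker.pytorch import PyTorch

# train.py, the role ARN, and the S3 path are placeholders.
estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=2,
    instance_type="ml.p4d.24xlarge",
    framework_version="1.6.0",
    py_version="py36",
)
estimator.fit({"training": "s3://example-bucket/training-data/"})
```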

P4d instances are available in US East (N. Virginia) and US West (Oregon), with availability planned for additional regions soon. P4d instances can be purchased as On-Demand, with Savings Plans, with Reserved Instances, or as Spot Instances.

GE Healthcare is the $16.7 billion healthcare business of GE. As a leading global medical technology and digital solutions innovator, GE Healthcare enables clinicians to make faster, more informed decisions through intelligent devices, data analytics, applications and services, supported by its Edison intelligence platform.

“At GE Healthcare, we provide clinicians with tools that help them aggregate data, apply AI and analytics to that data and uncover insights that improve patient outcomes, drive efficiency and eliminate errors,” said Karley Yoder, VP & GM, Artificial Intelligence, at GE Healthcare.

“Our medical imaging devices generate massive amounts of data that need to be processed by our data scientists. With previous GPU clusters, it would take days to train complex AI models, such as Progressive GANs, for simulations and to view the results. Using the new P4d instances reduced processing time from days to hours.

“We saw two- to three-times greater speed on training models with various image sizes, while achieving better performance with increased batch size and higher productivity with a faster model development cycle.”

Toyota Research Institute (TRI), founded in 2015, is working to develop automated driving, robotics, and other human amplification technology for Toyota. “At TRI, we’re working to build a future where everyone has the freedom to move,” said Mike Garrison, Technical Lead, Infrastructure Engineering at TRI.

“The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours and we are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

Aon is a leading global professional services firm providing a broad range of risk, retirement and health solutions. Aon PathWise is a GPU-based and scalable HPC risk management solution that insurers and re-insurers, banks, and pension funds can use to address today’s key challenges such as hedge strategy testing, regulatory and economic forecasting, and budgeting.

“Aon PathWise allows (re)insurers and pension funds to access next generation technology to rapidly solve today’s key insurance challenges such as hedge strategy testing, regulatory and economic forecasting, and budgeting,” said Peter Phillips, President and CEO, PathWise.

“Through the use of AWS P4d instances with 2.5 petaflops of mixed-precision performance, we are able to deliver a two-fold reduction in cost to our customers without loss of performance, and can deliver a 2.5x improvement in speed for the most demanding calculations. Speed matters and we continue to delight our customers thanks to the new instances from AWS.”

Comprised of radiology and AI experts, Rad AI builds products that maximize radiologist productivity, ultimately making healthcare more widely accessible and improving patient outcomes.

“At Rad AI, our mission is to increase access to and quality of healthcare, for everyone. With a focus on medical imaging workflow, Rad AI saves radiologists time, reduces burnout, and enhances accuracy,” said Doktor Gurson, Co-founder of Rad AI.

“We use AI to automate radiology workflows and help streamline radiology reporting. With the new EC2 P4d instances, we’ve seen faster inference and the ability to train models 2.4x faster, with higher accuracy than on previous generation P3 instances. This allows faster, more accurate diagnosis, and greater access to high quality radiology services provided by our customers across the US.”

OmniSci is a pioneer in accelerated analytics. The OmniSci platform is used in business and government to find insights in data beyond the limits of mainstream analytics tools. “At OmniSci, we’re working to build a future where data science and analytics converge to break down and fuse data silos.

“Customers are leveraging their massive amounts of data that may include location and time to build a full picture of not only what is happening, but when and where through granular visualization of spatial temporal data. Our technology enables seeing both the forest and the trees,” said Ray Falcione, VP of US Public Sector, at OmniSci.

“Through the use of P4d instances, we were able to reduce the cost to deploy our platform significantly compared to previous generation GPU instances, enabling us to cost-effectively scale massive data sets. The networking improvements on the A100 have increased our efficiencies in how we scale to billions of rows of data and enabled our customers to glean insights even faster.”

Zenotech Ltd is redefining engineering online through the use of HPC Clouds delivering on demand licensing models together with extreme performance benefits by leveraging GPUs.

“At Zenotech we are developing the tools to enable designers to create more efficient and environmentally friendly products. We work across industries and our tools provide greater product performance insight through the use of large scale simulation,” said Jamil Appa, Director and Co-Founder, Zenotech.

“The use of P4d instances enables us to reduce our simulation runtime by 65% compared to the previous generation of GPUs. This speed up cuts our time to solve significantly allowing our customers to get designs to market faster or to do higher fidelity simulations than were previously possible.”

Wipro AWS Business Group: Fast-tracking customers’ cloud transformation journey on AWS

Wipro announced the launch of its dedicated Wipro AWS Business Group (WABG), a unit designed to help customers fast-track their cloud transformation journey on AWS.

WABG merges Wipro’s diverse industry experience and comprehensive portfolio of services with AWS’s industry-leading cloud platforms to help organizations worldwide drive business acceleration, enhance customer experience, and leverage connected insights.

This strategic move reflects the commitment of both Wipro and AWS to foster the success of their shared business as well as their passion to continually innovate for enterprises.

This launch was inspired by recent collaborations between Wipro and AWS, including the implementation of cloud solutions for Wabtec, a leading supplier of critical components, locomotives, services, signaling, and logistics systems and services for the global rail industry.

“Our close relationship with Wipro and AWS has allowed us to leverage cloud to drive continuous innovation that is pertinent to our organization,” said Richard Smith, Chief Information Officer, Wabtec. “I congratulate both Wipro and AWS on this milestone in their relationship,” he added.

The Wipro AWS Business Group will house more than 10,000 AWS-certified consultants, along with specialized teams focusing on business development, talent creation, solution development, and delivery execution.

Bhanumurthy B.M, President and Chief Operating Officer, Wipro Limited said, “We are seeing a rapid growth of our AWS business and are now ready to take this collaboration to the next level. Wipro has continuously demonstrated its deep cloud expertise and ‘business first’ thinking in enabling enterprises to become nimble and future-ready.

“Wipro AWS Business Group, with the combined strengths of Wipro and AWS under one umbrella, will help reinforce our capabilities and support our clients in their innovation-led cloud journey from strategy to execution.”

Wipro and AWS continue to invest in building innovative solutions across industries and key technology areas like migration, modernization, data freedom, and Artificial Intelligence/Machine Learning (AI/ML) to help clients reimagine their cloud journey.

Additionally, customers can leverage the Wipro AWS Launch Pad to facilitate their business transformation programs.

“By establishing the Wipro AWS Business Group, we are taking an important step forward in the long-standing relationship between the two companies,” said Matt Garman, Vice President, Sales and Marketing, Amazon Web Services, Inc.

“This business group will empower clients to rapidly achieve the benefits of moving to AWS, eliminate the undifferentiated heavy lifting of managing their IT infrastructure, and instead focus on their core business,” he added.

New infosec products of the week: October 30, 2020

Confluera 2.0: Enhanced autonomous detection and response capabilities to protect cloud infrastructure

Confluera XDR delivers a purpose-built cloud workload detection and response solution with the unique ability to deterministically track threats progressing through the environment. Confluera holistically integrates security signals from the environment to provide a complete narrative of a cyberattack in real time, as opposed to showing isolated alerts.


Aqua Security unveils Kubernetes-native security capabilities

Aqua Security’s new Kubernetes security solution addresses the complexity and short supply of engineering expertise required to configure Kubernetes infrastructure effectively and automatically, by introducing KSPM – Kubernetes Security Posture Management – a coherent set of policies and controls to automate secure configuration and compliance.


AWS Nitro Enclaves: Create isolated environments to protect highly sensitive workloads

AWS Nitro Enclaves helps customers reduce the attack surface for their applications by providing a trusted, highly isolated, and hardened environment for data processing. Each Enclave is a virtual machine created using the same Nitro Hypervisor technology that provides CPU and memory isolation for Amazon EC2 instances, but with no persistent storage, no administrator or operator access, and no external networking.


GrammaTech CodeSentry: Identifying security blind spots in third party code

GrammaTech announced CodeSentry, which performs binary software composition analysis to inventory third party code used in custom developed applications and detect vulnerabilities they may contain. CodeSentry identifies blind spots and allows security professionals to measure and manage risk quickly and easily throughout the software lifecycle.


Protegrity Data Protection Platform enhancements help secure sensitive data across cloud environments

Built for hybrid-cloud and multi-cloud serverless computing, Protegrity’s latest platform enhancements allow companies to deploy and update customized policies across geographies, departments, and digital transformation programs. Protegrity enables businesses to turn sensitive data into intelligence-driven insights to monetize data responsibly, and support vital AI and ML initiatives.


AWS Nitro Enclaves: Create isolated environments to protect highly sensitive workloads

Amazon Web Services announced the general availability of AWS Nitro Enclaves, a new Amazon EC2 capability that makes it easier for customers to securely process highly sensitive data.


AWS Nitro Enclaves helps customers reduce the attack surface for their applications by providing a trusted, highly isolated, and hardened environment for data processing.

Each Enclave is a virtual machine created using the same Nitro Hypervisor technology that provides CPU and memory isolation for Amazon EC2 instances, but with no persistent storage, no administrator or operator access, and no external networking.

This isolation means that applications running in an Enclave remain inaccessible to other users and systems, even to users within the customer’s organization. With this isolation, the AWS Nitro Enclave owner can start, stop, or assign resources to an Enclave, but even the owner cannot see what is being processed inside of AWS Nitro Enclaves.

AWS also announced the launch of AWS Certificate Manager (ACM) for Nitro Enclaves, a new Enclave application that makes it easy for customers to protect and manage Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for their webservers running on Amazon EC2.

Many customers across all industries have asked for help to further protect their highly sensitive data like personally identifiable information, financial data, healthcare records, intellectual property, and more – including from internal users within their own accounts.

Today, customers can protect their data with access controls and by using encryption while it is at rest and in transit, but encryption does not protect data when it is unencrypted at the point of use (e.g. a healthcare recommendations algorithm must have access to unencrypted patient data).

One solution is to remove much of the functionality that an instance provides for general-purpose computing (e.g. networking, the ability to log into an instance, the capability to store and retrieve data, etc.), but doing so renders the rest of the instance less useful.

To protect unencrypted data during processing, customers often set up separate instance clusters for secure data, configured with limited connectivity, restricted user access, and other strict isolation measures.

However, the possibility of human error in the setup and administration of such complex custom systems can lead to availability issues or security oversights, and managing these extra instances is an operational burden, an organizational bottleneck, and expensive.

With AWS Nitro Enclaves, customers simply select an instance type and decide how much CPU and memory they want to designate to the Enclave. AWS Nitro Enclaves provides the flexibility to partition varying combinations of CPU cores and memory, enabling customers to match resources to the size and performance demands of their workloads.
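
From the API side, enclave support is a launch-time option on the parent instance. The boto3 sketch below enables it; the AMI ID is a placeholder, and the enclave’s actual vCPU and memory share is assigned later, on the instance itself, when the enclave is started.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a Nitro-based parent instance with enclave support enabled.
# The enclave's own vCPUs and memory are carved out of this instance
# when the enclave is started.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder
    InstanceType="m5.2xlarge",        # an AWS Nitro System instance type
    MinCount=1,
    MaxCount=1,
    EnclaveOptions={"Enabled": True},
)
```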

Customers can develop Enclave applications using the AWS Nitro Enclaves SDK, an open source set of libraries. The AWS Nitro Enclaves SDK also integrates with AWS Key Management Service (KMS), allowing customers to generate data keys and to decrypt them inside the Enclave.

With ACM for Nitro Enclaves, customers can easily isolate SSL/TLS certificates within an Enclave, making them usable by webservers on the instance while protecting them from access by other users or applications in the customer’s environment. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet or resources on private networks.

ACM for Nitro Enclaves ensures that sensitive data associated with these certificates never leaves the Enclave, while also managing the revocation and renewal of certificates to reduce the need for manual monitoring and webserver reconfigurations when a certificate expires.
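
Under the hood, associating the certificate with the instance’s IAM role is a single EC2 API call. A minimal boto3 sketch, assuming placeholder certificate and role ARNs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Both ARNs are placeholders. The call grants the role read access
# to the managed certificate material used inside the enclave.
resp = ec2.associate_enclave_certificate_iam_role(
    CertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/example",
    RoleArn="arn:aws:iam::123456789012:role/WebServerRole",
)
print(resp["CertificateS3BucketName"], resp["CertificateS3ObjectKey"])
```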

“Customers often tell us that powerful built-in protections like the locked-down security model of the Nitro System are among the primary reasons why they trust AWS with their workloads,” said David Brown, Vice President, Amazon EC2, at AWS.

“Nitro Enclaves builds on those same security and isolation models that have set AWS apart for so many customers, delivering a more efficient method for securely processing highly sensitive data. This means customers can build and innovate faster in a way that still meets the highest bar for security.”

AWS Nitro Enclaves is available on the majority of Intel and AMD-based Amazon EC2 instance types built on the AWS Nitro System (AWS Graviton2-based instance support is coming in the first half of 2021).

AWS Nitro Enclaves is available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and South America (São Paulo) regions, with more regions coming soon.

Anjuna provides simple, secure, enterprise-ready application and data protection against malicious software, IT insiders, and bad actors. “Our customers come to Anjuna because they want a simple way to get their applications up and running in a secure, isolated compute environment,” said Ayal Yogev, CEO and Co-Founder of Anjuna Security.

“The nature of our business has given us insight into different approaches for achieving this type of isolation. Our hands-on work with Nitro Enclaves confirms that this is a powerful solution for enterprises looking to process sensitive data in a way that protects this data from insider threats.

“Nitro Enclaves is exactly the type of innovation that has security-minded organizations looking to the cloud, and that’s why Anjuna is supporting the service to help AWS customers quickly ‘lift and shift’ applications to Enclaves – without recoding or changing processes.”

castLabs pioneers software and cloud services for digital video markets worldwide. “As a globally operating cloud service provider handling our clients’ most valuable data and encryption keys, we’re striving to achieve the highest levels of data security, isolation, and trust,” said Michael Stattmann, CEO and Founder, castLabs.

“Working with an advanced security technology usually increases overhead, but with Nitro Enclaves, achieving a confidential computing implementation is easy to develop and deploy, using much more familiar technologies.”

Evervault provides simple SDKs for developers to encrypt sensitive data as it enters their infrastructure, and to process that data without ever exposing it. “Our mission is to encrypt the internet,” said Shane Curran, CEO, Evervault. “Nitro Enclaves provides the perfect platform to make this happen, because it’s the best way to protect data in use.”

WekaIO WekaFS: Unified storage solutions with cloud-native ecosystem partners

WekaIO announced a transformative cloud-native storage solution underpinned by the world’s fastest file system, WekaFS, that unifies and simplifies the data pipeline for performance-intensive workloads and accelerated DataOps.

Weka has developed reference architectures (RAs) with leading object storage technology providers, like Amazon Web Services (AWS), Cloudian, IBM, Seagate, Quantum, Scality, and others in Weka’s Technology Alliance Program, to deliver cost-efficient, cloud-native data storage solutions at any scale.

And Weka’s OEM partnership with Hitachi Vantara will deliver an integrated end-to-end stack solution based on the Hitachi Content Platform.

WekaFS provides the ease of managing petabytes of data in a single, unified namespace wherever in the pipeline the data is stored, while also delivering the best performance to accelerate artificial intelligence/machine learning (AI/ML), genomics research, high-performance computing (HPC), and high-performance data analytics (HPDA) workflows.

Weka’s unified storage solutions with cloud-native ecosystem partners provide the following customer benefits:

  • Faster actionable business intelligence from a single high-performance storage solution
  • Cost-efficiency with the ability to manage, scale, and share data sets
  • Operational agility eliminating storage silos across edge, core, and cloud
  • Enterprise robustness and secure data governance

Manage more petabytes of data cost-effectively and with fewer resources

Extending the WekaFS namespace from high-performance flash to an Amazon Simple Storage Service (S3) REST-enabled cloud object storage system is a simpler and more cost-efficient strategy for managing petascale datasets without compromising performance.

The filesystem metadata resides on flash while seamlessly extending capacity over object storage, private or public. All the I/Os are serviced by the flash tier while leveraging the object tier for capacity scaling.

WekaFS allows data portability across multiple consumption models supporting both private and public clouds with the ability to extend the namespace across both. A cloud-first model delivers the best storage efficiency and TCO across consumption models and data tiers.

Facilitating data protection, mobility, and DR

As data has become a strategic asset for businesses, lifecycle management is paramount. However, the datasets encountered in AI/ML, genomics, HPC, and HPDA have grown so big and agile that traditional backup and DR applications fall short, creating siloed namespaces and workflows that lack operational agility and data protection.

Data versioning is achieved using Weka’s instant and space-efficient snapshots capability for experiment reproducibility and explainability. The snap-to-object feature captures a point-in-time copy of the entire, unified (flash and object store) file namespace that can be presented as another file namespace instance in a private or public cloud.

Weka’s integrated snapshots and end-to-end encryption features ensure data is always backed up and secure throughout its lifecycle. WekaFS also provides immutability and data mobility for these datasets with instant recovery.

Weka has partnered with leading private and public cloud partners to ensure a fully validated and performant storage solution ecosystem, including these certified solutions: AWS S3, AWS Outposts, Cloudian HyperStore, Hitachi Content Platform (HCP), IBM Cloud Object System (IBM COS), Quantum ActiveScale, and Scality RING.

Alcide integrates with AWS Security Hub to send alerts on risks to Kubernetes deployments

Alcide announced the company’s security solutions are now integrated with AWS Security Hub, sending real-time threat intelligence and compliance information to Amazon Web Services (AWS) for easy consumption by Security and DevSecOps teams. Alcide’s SaaS and container-based solutions for Kubernetes security are available in AWS Marketplace.

AWS Security Hub gives AWS customers a comprehensive view of security posture across all their AWS accounts. As a single place that aggregates, organizes, and prioritizes security information from multiple sources, AWS Security Hub helps identify security findings and remediate security threats. AWS Security Hub supports AWS-native applications and AWS Partner solutions, such as Alcide’s.

“In order to provide a comprehensive security posture assessment for each of our diverse customers, we recognize that AWS Security Hub must bring together a comprehensive set of industry-leading security AWS Partners,” said Dan Plastina, Vice President, Security Services, Amazon Web Services, Inc.

“Today, we’re pleased to add the Alcide Kubernetes Security Platform to the list of security integrations for AWS Security Hub.”

The Alcide Kubernetes Security Platform sends Kubernetes security alerts to AWS Security Hub, highlighting security events derived from Kubernetes audit logs. The Alcide kAudit module continuously monitors Kubernetes audit logs to detect known threats using pre-set rules, and detects unknown threats by applying Alcide’s unique ML-based anomaly engine.
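
Security Hub ingests partner findings in the AWS Security Finding Format (ASFF). The boto3 sketch below shows the general shape of such an import with a hypothetical Kubernetes-audit finding; the product ARN, account ID, and resource name are placeholders, not Alcide’s actual integration code.

```python
import boto3
from datetime import datetime, timezone

securityhub = boto3.client("securityhub", region_name="us-east-1")
now = datetime.now(timezone.utc).isoformat()

# A single ASFF finding; IDs, ARNs, and names are placeholders.
securityhub.batch_import_findings(Findings=[{
    "SchemaVersion": "2018-10-08",
    "Id": "k8s-audit-anomaly-001",
    "ProductArn": "arn:aws:securityhub:us-east-1:123456789012:product/123456789012/default",
    "GeneratorId": "kaudit-anomaly-engine",
    "AwsAccountId": "123456789012",
    "Types": ["Unusual Behaviors/Container"],
    "CreatedAt": now,
    "UpdatedAt": now,
    "Severity": {"Label": "HIGH"},
    "Title": "Anomalous Kubernetes API activity detected",
    "Description": "Unusual pattern observed in Kubernetes audit logs.",
    "Resources": [{"Type": "Other", "Id": "eks-cluster/example-cluster"}],
}])
```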

The Alcide Platform also provides Kubernetes security best practices and compliance checks. It allows AWS customers to determine if their Kubernetes deployments are configured correctly and whether there is any security drift between development, testing, and production.

The Alcide Platform also supports threat intelligence, detecting malicious network activity such as crypto-mining down to the pod level. Lastly, Alcide’s anomaly engine also detects advanced network attacks such as low-and-slow evolving network attacks and DNS tunneling.

“Integrating with the AWS Security Hub is an important strategic achievement for Alcide. Our Kubernetes Security Platform enables continuous audit and compliance for Kubernetes clusters, and integrating with AWS Security Hub will make our software even easier to deploy for DevOps teams using AWS,” said Amir Ofek, CEO of Alcide.

The rapid adoption of Kubernetes has left many companies struggling to find developers experienced with Kubernetes, and security has suffered as a result. In 2019, Alcide conducted an industry study with the Alcide Advisor by scanning over 5,000 Kubernetes deployments and found that 89% were not leveraging the Kubernetes secrets functionality, potentially exposing sensitive data to the internet and malicious actors.

Subsequently, the Alcide kAudit module was selected as one of the 10 hottest Kubernetes technologies in 2019 by CRN magazine for the threat intelligence it could extract from real-time monitoring of Kubernetes audit logs.

New infosec products of the week: October 2, 2020

Cohesity SiteContinuity: Protecting business-critical apps across a single platform

Cohesity SiteContinuity is an automated disaster recovery solution that is integrated with the company’s backup and continuous data protection capabilities — making it the only web-scale, converged solution to protect applications across tiers, service levels, and locations on a single platform.


Stealthbits SbPAM 3.0: A modernized and simplified approach to PAM

SbPAM 3.0 continues Stealthbits’ commitment to renovate and simplify PAM. The company approaches PAM from the perspective of the abundance of privileged activities that need to be performed, not a group of privileged admins needing accounts.


BullGuard 2021 security suite features multi-layered protection

The BullGuard 2021 security suite empowers consumers to confidently perform sensitive online transactions in absolute safety, resting assured that cyber threats are stopped dead in their tracks. BullGuard 2021 blocks malicious behavior before it can do damage, even when malware attempts to intentionally take a consumer’s device offline.


Siemens Energy MDR defends energy companies against cyberattacks

MDR’s technology platform, Eos.ii, leverages AI and machine learning methodologies to gather and model real-time energy asset intelligence. This allows Siemens Energy’s cybersecurity experts to monitor, detect and uncover attacks before they execute.


Fleek launches Space, an open source, private file storage and collaboration platform

Space’s mission is to enable a fully private, peer to peer (p2p) file and work collaboration experience for users. Space is built on Space Daemon, the open source framework, and backend of the platform. Space Daemon enables other apps, similar to Space, to build privacy-focused, encrypted p2p apps.


AWS launches Amazon Timestream, a serverless time series database for IoT and operational applications

Amazon Timestream simplifies the complex process of data lifecycle management with automated storage tiering that stores recent data in memory and automatically moves historical data to a cost-optimized storage tier based on predefined user policies.


AWS launches Amazon Timestream, a serverless time series database for IoT and operational applications

Amazon Web Services (AWS) announced the general availability of Amazon Timestream, a new time series database for IoT and operational applications that can scale to process trillions of time series events per day up to 1,000 times faster than relational databases, and at as low as 1/10th the cost.


Amazon Timestream saves customers effort and expense by keeping recent data in-memory and moving historical data to a cost-optimized storage tier based upon user-defined policies. Its query processing gives customers the ability to access and combine recent and historical data transparently across tiers with a single query, without needing to specify explicitly whether the data resides in the in-memory or cost-optimized tier.

Amazon Timestream’s analytics features provide time series-specific functionality to help customers identify trends and patterns in data in near real time. Because Amazon Timestream is serverless, it automatically scales up or down to adjust capacity based on load, without customers needing to manage the underlying infrastructure.

There are no upfront costs or commitments required to use Amazon Timestream, and customers pay only for the data they write, store, or query.

Today’s customers want to build IoT, edge, and operational applications that collect, synthesize, and derive insights from enormous amounts of data that change over time (known as time series data). For example, manufacturers might want to track IoT sensor data that measure changes in equipment across a facility, online marketers might want to analyze clickstream data that capture how a user navigates a website over time, and data center operators might want to view data that measure changes in infrastructure performance metrics.

This type of time series data can be generated from multiple sources in extremely high volumes, needs to be cost-effectively collected in near real time, and requires efficient storage that helps customers organize and analyze the data.

To do this today, customers can either use existing relational databases or self-managed time series databases. Neither of these options is attractive. Relational databases have rigid schemas that need to be predefined and are inflexible if new attributes of an application need to be tracked.

For example, when new devices come online and start emitting time series data, rigid schemas mean that customers either have to discard the new data or redesign their tables to support the new devices, which can be costly and time-consuming.

In addition to rigid schemas, relational databases also require multiple tables and indexes that must be updated as new data arrives, leading to complex and inefficient queries as the data grows over time.

Additionally, relational databases lack the required time series analytical functions like smoothing, approximation, and interpolation that help customers identify trends and patterns in near real time.

Alternatively, time series database solutions that customers build and manage themselves have limited data processing and storage capacity, making them difficult to scale. Many of the existing time series database solutions fail to support data retention policies, creating storage complexity as data grows over time.

To access the data, customers must build custom query engines and tools, which are difficult to configure and maintain, and can require complicated, multi-year engineering initiatives. Furthermore, these solutions do not integrate with the data collection, visualization, and machine learning tools customers are already using today. The result is that many customers just don’t bother saving or analyzing time series data, missing out on the valuable insights it can provide.

Amazon Timestream addresses these challenges by giving customers a purpose-built, serverless time series database for collecting, storing, and processing time series data. Amazon Timestream automatically detects the attributes of the data, so customers no longer need to predefine a schema.

Amazon Timestream simplifies the complex process of data lifecycle management with automated storage tiering that stores recent data in memory and automatically moves historical data to a cost-optimized storage tier based on predefined user policies.
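
Those user-defined policies are set as retention properties when a table is created. A minimal boto3 sketch, with placeholder database and table names and illustrative retention periods:

```python
import time
import boto3

write = boto3.client("timestream-write", region_name="us-east-1")

write.create_database(DatabaseName="iot")

# Recent data stays in the memory store for 24 hours, then moves
# automatically to the cost-optimized magnetic store for a year.
write.create_table(
    DatabaseName="iot",
    TableName="sensor_readings",
    RetentionProperties={
        "MemoryStoreRetentionPeriodInHours": 24,
        "MagneticStoreRetentionPeriodInDays": 365,
    },
)

# Ingest a single reading; dimensions and measure are placeholders.
write.write_records(
    DatabaseName="iot",
    TableName="sensor_readings",
    Records=[{
        "Dimensions": [{"Name": "device_id", "Value": "sensor-42"}],
        "MeasureName": "temperature",
        "MeasureValue": "21.7",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),
        "TimeUnit": "MILLISECONDS",
    }],
)
```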

Amazon Timestream also uses a purpose-built adaptive query engine to transparently access and combine recent and historical data across tiers with a single SQL statement, without having to specify which storage tier houses the data. This enables customers to query all of their data using a single query without requiring them to write complicated application logic that looks up where their data is stored, queries each tier independently, and then combines the results into a complete view.

Amazon Timestream provides built-in time series analytics, with functions for smoothing, approximation, and interpolation, so customers don’t have to extract raw data from their databases and then perform their time series analytics with external tools and libraries or write complex stored procedures that not all databases support.
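
Continuing the placeholder table from the sketch above, a single query spans both storage tiers and can use Timestream’s built-in time series helpers such as bin() and ago():

```python
import boto3

query = boto3.client("timestream-query", region_name="us-east-1")

# One SQL statement transparently covers the in-memory and
# cost-optimized tiers; no tier is named in the query.
result = query.query(QueryString="""
    SELECT device_id,
           bin(time, 5m) AS binned_time,
           avg(measure_value::double) AS avg_temperature
    FROM "iot"."sensor_readings"
    WHERE measure_name = 'temperature' AND time > ago(24h)
    GROUP BY device_id, bin(time, 5m)
    ORDER BY binned_time
""")
for row in result["Rows"]:
    print(row)
```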

Amazon Timestream’s serverless architecture is built with fully decoupled data ingestion and query processing systems, giving customers virtually infinite scale and the ability to grow storage and query processing independently and automatically, without requiring customers to manage the underlying infrastructure.

In addition, Amazon Timestream integrates with popular data collection, visualization, and machine learning tools that customers use today, including services like AWS IoT Core (for IoT data collection), Amazon Kinesis and Amazon MSK (for streaming data), Amazon QuickSight (for serverless Business Intelligence), and Amazon SageMaker (for building, training, and deploying machine learning models quickly), as well as open source, third-party tools like Grafana (for observability dashboards) and Telegraf (for metrics collection).

“What we hear from customers is that they have a lot of insightful data buried in their industrial equipment, website clickstream logs, data center infrastructure, and many other places, but managing time series data at scale is too complex, expensive, and slow,” said Shawn Bice, VP, Databases, AWS.

“Solving this problem required us to build something entirely new. Amazon Timestream provides a serverless database service that is purpose-built to manage the scale and complexity of time series data in the cloud, so customers can store more data more easily and cost effectively, giving them the ability to derive additional insights and drive better business decisions from their IoT and operational monitoring applications.”

Amazon Timestream is available today in US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland), with availability in additional regions in the coming months.

The Guardian Life Insurance Company of America® (Guardian Life) is a Fortune 250 mutual company and a leading provider of life, disability, dental, and other benefits for individuals, at the workplace, and through government sponsored programs.

“Our team is building applications that collect and process metrics from our build systems and artifact repositories. We currently store this data in a self-hosted time series database,” said Eric Fiorillo, Head of Application Platform Strategy, Guardian Life.

“We started evaluating Amazon Timestream for storing and processing this data. We’re impressed with Amazon Timestream’s serverless, autoscaling, and data lifecycle management capabilities. We’re also thrilled to see that we can visualize our time series data stored in Amazon Timestream with Grafana.”

Autodesk is a global leader in software for architecture, engineering, construction, media and entertainment, and manufacturing industries. “At Autodesk, we make software for people who make things. This includes everything from buildings, bridges, roads, cars, medical devices, and consumer electronics, to the movies and video games that we all know and love,” said Scott Reese, SVP of Manufacturing, Cloud, and Production Products, Autodesk.

“We see that Amazon Timestream has the potential to help deliver new workflows by providing a cloud-hosted, scalable time series database. We anticipate that this will improve product performance and reduce waste in manufacturing. The key differentiator that excites us is the promise that this value will come without adding a data management burden for customers or for Autodesk.”

PubNub’s Realtime Communication Platform processes trillions of messages per month on behalf of thousands of customers and millions of end users.

“To effectively operate the PubNub platform, it is essential to monitor the enormous number of high-cardinality metrics that this traffic generates. As our traffic volumes and the number of tracked metrics have grown over time, the challenges of scaling our self-managed monitoring solution have grown as well, and it is prohibitively expensive for us to use a SaaS monitoring solution for this data. Amazon Timestream has helped address both of these needs perfectly,” said Dan Genzale, Director of Operations, PubNub.

“We’ve been working with AWS as a Timestream preview customer, providing feedback throughout the preview process. AWS has built an amazing product in Timestream, in part by incorporating PubNub’s feedback. We truly appreciate the fully managed and autoscaling aspects that we have come to expect of AWS services, and we’re delighted that we can use our existing visualization tools with Amazon Timestream.”

Since 1998, Rackspace Technology has delivered enterprise-class hosting, professional services, and managed public cloud for businesses of all sizes and kinds around the world.

“At Rackspace, we believe Amazon Timestream fills a longstanding need for a fully managed service to capture time series data in a cloud native way. In our work with Amazon Timestream we’ve observed the platform to be performant and easy to use, with a developer experience that is familiar and consistent with other AWS services,” said Eric Miller, Senior Director of Technical Strategy, Rackspace Technology.

“Cloud Native and IoT are both core competencies for us, so we’re very pleased to see that Amazon Timestream is 100% serverless, and that it has tight integration with AWS IoT Core rule actions to easily ingest data without any custom code. Organizations that have a use case to capture and process time series data should consider Amazon Timestream as a scalable and reliable solution.”

Cake is a performance marketing software company that stores and analyzes billions of clickstream events. “Previously we used a DIY time series solution that was cumbersome to manage and was starting to tip over at scale,” said Tyler Agee, Principal Architect, Cake Software.

“When we heard AWS was building a time series database service—Amazon Timestream—we signed up for the preview and started testing our workloads. We’ve worked very closely with the AWS service team, giving them feedback and data on our use case to help ensure Amazon Timestream really excels in production for the size and scale of time series data we’re dealing with.

“The result is phenomenal—a highly scalable and fully serverless database. It’s the first time we’ve had a single solution for our time series data. We’re looking forward to continuing our close work with AWS and cannot wait to see what’s in store for Amazon Timestream.”

Trimble Inc. is a leading technology provider of productivity solutions for the construction, resources, geospatial, and transportation industries. “Whenever possible, we leverage AWS’s managed service offerings. We are excited to now use Amazon Timestream as a serverless time series database supporting our IoT monitoring solution,” said David Kohler, Engineering Director, Trimble. “Timestream is purpose-built for our IoT-generated time series data, and will allow us to reduce management overhead, improve performance, and reduce costs of our existing monitoring system.”

With over 60 years of fashion retailing experience, River Island is one of the most well-known and loved brands, with over 350 stores across Europe, Asia, and the Middle East, and six dedicated online sites operating in four currencies.

“The Cloud Engineering team have been excited about the release of Amazon Timestream for some time. We’ve struggled to find a time series data store that is simple, easy, and affordable,” said Tonino Greco, Head of Cloud and Infrastructure, River Island.

“With Amazon Timestream we get that and more. Amazon Timestream will enable us to build a central monitoring capability across all of our heritage systems, as well as our AWS hosted microservices. Interesting times!”

D2L is a global leader in educational technology, and the pioneer of the Brightspace learning platform used by customers in K-12, higher education, healthcare, government, and the corporate sector.

“Our team is excited to use Amazon Timestream for our internal synthetic monitoring tool, which currently stores data in a relational database,” said Andrew Alkema, Sr. Software Developer, D2L.

“By switching to Amazon Timestream, a fully managed time series database, we can maintain performance while reducing cost by over 80%. Timestream’s built-in storage tiering and configurable data retention policies are game-changers, and will save our team a lot of time spent on mundane activities.”
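
The storage tiering and retention policies the quote refers to are configured per table. As a hypothetical sketch (database and table names invented for illustration):

import boto3

write_client = boto3.client("timestream-write")

# Recent data stays in the fast memory store; older data moves
# automatically to the cheaper magnetic store, then expires.
write_client.create_table(
    DatabaseName="ops_db",       # hypothetical database
    TableName="synthetics",      # hypothetical table
    RetentionProperties={
        "MemoryStoreRetentionPeriodInHours": 24,    # hot tier: 1 day
        "MagneticStoreRetentionPeriodInDays": 365,  # cold tier: 1 year
    },
)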

Fleetilla is a leading provider of end-to-end solutions for managing trailers, land-based intermodal containers, construction equipment, unpowered assets, and conventional commercial telematics for over-the-road vehicles.

“Fleetilla works with real-time telematics data from IoT devices around the world. Recently we saw a need to integrate a variety of different data feeds to provide a unified ‘single pane of glass’ view for complex mixed fleet environments. We are using Amazon Timestream to provide a cost-effective database system which will replace our existing complex solution composed of multiple other tools,” said Marc Wojtowicz, VP of IT and Cloud Services, Fleetilla.

“The fully managed Amazon Timestream service means less work for our DevOps team, the SDKs available in our preferred programming language mean simpler implementation for our developers, and the familiar SQL-based language means a shorter learning curve for our data analysts. Timestream’s built-in scalability and analytics features allow us to offer faster and richer experiences to our customers, and the machine learning integration allows us to continue innovating and improving our services for our customers.”

Anitian unveils SecureCloud on AWS, enabling rapid and secure deployment of mission-critical apps

Anitian announced SecureCloud, a new pre-engineered security service on Amazon Web Services (AWS). SecureCloud addresses a daunting challenge for business, DevOps, and security leaders: rapid deployment of applications and services to customers – without sacrificing security measures or privacy protections.

With SecureCloud, customers can:

  • Save time and money with a complete, pre-engineered security service that automates the complex, error-prone process of architecting, configuring, and deploying disparate security tools.
  • Bring new cloud workloads to market – securely and rapidly.
  • Free up business, DevOps, and security teams to focus on their core business functions – while enabling them to adapt to new ways of working.

“We greatly benefited from Anitian’s Compliance Automation Platform to migrate our application to the AWS cloud and achieve our FedRAMP authorization,” said Ignacio Martinez, vice president of security, risk, and compliance for Smartsheet.

“We look forward to further collaborating with Anitian and their pre-engineered SecureCloud as we continue to expand our government business.”

A recent Gartner forecast of 2020 worldwide security and risk management spending noted that the COVID-19 pandemic is driving demand in areas such as cloud adoption, remote worker technologies, and cost-saving measures.

At 33.3% worldwide growth, cloud security is the fastest-growing segment. And Forrester Research’s Cloud Security Solutions Forecast, 2018 To 2023 predicts that global spending on cloud security tools will top $12.6 billion in 2023, with a focus on public cloud native platform security.

“We hear consistently from customers about their fatigue with procuring, configuring, deploying, and integrating a disparate set of ‘best-of-breed’ security tools and controls. Companies like Amazon, Microsoft and Google have already paved the way for customers to procure and use cloud services instead of building their own ‘best-of-breed’ services,” said Rakesh Narasimhan, Anitian’s CEO.

“Anitian’s SecureCloud is the next evolution for customers to shift their mission-critical apps into a pre-engineered and secure environment with the most stringent security standards and controls. Given the dynamics of today’s environment, businesses are leveraging the cloud more than ever to seamlessly enable more ways of working. With SecureCloud, security transforms from an impediment to an enabler that accelerates a customer’s business.”

“Anitian has been a key player helping our customers achieve compliance with standards such as FedRAMP and those of the Payment Card Industry (PCI), while also taking full advantage of the scale, security and innovation AWS offers,” said Sandy Carter, Vice President of Public Sector Partners and Programs at AWS.

Achieve cloud security and compliance quickly with SecureCloud

SecureCloud wraps a complete set of critical security technologies around a cloud application in hours. Using Anitian’s unique automation technologies, SecureCloud configures, deploys, and hardens a comprehensive stack of security tools and controls – including endpoint security, remote access, multi-factor authentication, encryption, vulnerability management, zero-trust networking, and security information and event management (SIEM).

DevOps teams can stop struggling with confusing security configurations or access rights. SecureCloud provides a complete solution that lets developers focus their creativity on feature and functionality development.

Security leaders no longer need to be concerned about intricate configurations or lengthy policy documents. SecureCloud automates this tedious work – with pre-engineered controls, and pre-defined templates. Now, security leaders can focus on risk management and corporate security governance.

Business leaders no longer need to worry about whether their business is secure. SecureCloud is built to exacting security standards such as FedRAMP, PCI, CMMC, GDPR, and ISO 27001. Even businesses that don’t require these compliance standards can accelerate market entry for new applications with the confidence that security is enforced by default and by design.

Trustwave Fusion platform now also hosted on Amazon Web Services GovCloud

Trustwave announced the Trustwave Fusion platform is now also hosted on Amazon Web Services (AWS) GovCloud, providing U.S. government agencies and suppliers with threat detection and response services to help address the constantly shifting threat landscape while meeting stringent U.S. Federal government security requirements.

The cloud-native Trustwave Fusion platform delivers the first U.S.-only managed threat detection and response services hosted on AWS GovCloud and is in the process of obtaining FedRAMP authorization. The Trustwave Fusion platform is the cornerstone of the company’s managed security services, products and other cybersecurity offerings.

“The scale and scope of government cybersecurity challenges are bigger than ever,” said Bill Rucker, president, Trustwave Government Solutions.

“The adversarial landscape is so complex, and agencies continue to face a massive cyber workforce gap. As mobility and cloud widen the attack surface, user behavior patterns have become more difficult to monitor. By unifying powerful threat detection and response services and technologies with some of the top talent in cybersecurity, Trustwave can help agencies respond to attackers’ evolving tactics.”

Helping agencies gain network visibility

One major finding of the U.S. Office of Management and Budget’s (OMB) Federal Cybersecurity Risk Determination Report and Action Plan – released in May 2018 – was that a majority of agencies lack sufficient visibility into what is happening on their network.

OMB mandated that agencies submit an enterprise-level Cybersecurity Operations Maturation Plan, as well as complete Security Operations Center (SOC) maturation, consolidation or migration to SOC-as-a-Service by September 2020.

The Trustwave Fusion platform helps agencies on this journey, connecting their digital footprints to a robust security cloud comprising the Trustwave data lake, advanced analytics, actionable threat intelligence, and a wide range of security services and products, staffed by U.S. citizens, including Trustwave SpiderLabs, the company’s elite team of security specialists.

The platform unifies these capabilities onto a single, easy-to-use interface that can be accessed and managed via desktop, tablet or mobile phone. Agencies and suppliers can manage complex security programs and scale resources as needed with simple point-and-click navigation.

Compliance with government security requirements

The Trustwave Fusion platform runs completely in-country and enforces a “U.S. eyes only” policy, helping ensure that prime contractors and the cyber supply chain are secure.

Trustwave Government Solutions is a FOCI-mitigated entity with a Superior rating from the Defense Counterintelligence and Security Agency (DCSA), the highest-level rating awarded to private sector companies.

The platform enables customers to adhere to International Traffic in Arms Regulations (ITAR), FedRAMP requirements, the Defense Federal Acquisition Regulation Supplement (DFARS), as well as DoD Impact Levels 2, 4 and 5 and Cybersecurity Maturity Model Certification (CMMC) requirements.

Hybrid security operations

As agencies continue to deploy and manage complex multi-cloud environments, many lack the skilled cyber resources to do so in-house.

Through APIs and Information Technology Infrastructure Library (ITIL)-based service management, the Trustwave Fusion platform tears down walls between Trustwave Managed Threat Detection and Response services, security testing services and an agency’s own SOC.

On-demand access to threat hunting and powerful threat intelligence

Agencies have access to advanced threat hunters and actionable threat intelligence derived from the global network of Trustwave Security Operations Centers and the Trustwave SpiderLabs Fusion Center, a leading-edge security command center. These facilities identify, collect and track the latest vulnerabilities, malware strains and adversary tactics.

Complete visibility and centralized control

The Trustwave Fusion platform offers a single dashboard view of threats, technology management, vulnerabilities and perceived risks across an organization’s entire environment.

Built using Security Orchestration, Automation and Response (SOAR) layers, the platform uses advanced analytics, machine learning and automation to improve incident accuracy and response.

Support for third-party data and products

The Trustwave Fusion platform integrates data lakes, technology actions and threat intelligence stemming from third-party sources into an agency’s environment to further strengthen its cybersecurity posture.

“As the threat landscape grows more challenging, the Federal government continues to struggle with complex environments, myriad legacy systems and a lack of resources to meet the issue head-on,” said Kevin Kerr, chief information security officer, Oak Ridge National Laboratory.

“A shift toward managed threat detection and response, and virtual, hybrid SOC environments give agencies the visibility and cyber defense support they need to improve their security postures and advance their missions.”

CrowdStrike enhances services for AWS

CrowdStrike announced the expansion of support for Amazon Web Services (AWS) with new capabilities that deliver integrations for the compute services and cloud services categories. Through these expanded services, CrowdStrike is enhancing development, security and operations (DevSecOps) to enable faster and more secure innovation that is easier to deploy.

The expanded capabilities that CrowdStrike is delivering support the growing needs of today’s cloud-first businesses that operate and innovate in the cloud. The CrowdStrike Falcon platform delivers advanced threat protection and comprehensive visibility that scale to secure cloud workloads and container deployments across organizations.

This enables enterprises to accelerate their digital transformation while protecting their businesses against the nefarious activity of sophisticated threat actors. The expanded support gives customers comprehensive insight across different compute services, secure communication across deployment fleets, automatic workload discovery, and cloud visibility across multiple accounts.

“As security becomes an earlier part of the development cycle, development teams must be equipped with solutions that allow them to quickly and effectively build, from the ground up, the strength and protection needed for the evolving threat landscape,” said Amol Kulkarni, chief product officer of CrowdStrike. “Through our growing integrations and strong collaboration with AWS, CrowdStrike is providing security teams the scale and tools needed to adopt, innovate and secure technology across any workload with speed and efficiency, making it easier to address security issues in earlier phases of development and providing better, holistic protection and uptime for end users.”

AWS Graviton – CrowdStrike provides cloud-native workload protection for Amazon Elastic Compute Cloud (Amazon EC2) A1 instances powered by AWS Graviton Processors, as well as the C6g, M6g and R6g Amazon EC2 instances based on the new Graviton2 Processors. With the Falcon lightweight agent, customers receive the same seamless protection and visibility across different compute instance types with minimal impact on runtime performance. CrowdStrike Falcon secures Linux workloads running on ARM with no requirements for reboots, “scan storms” or invasive signature updates.

Amazon WorkSpaces – Amazon WorkSpaces is a fully managed, Desktop-as-a-Service (DaaS) solution that provides users with either Windows or Linux desktops in just a few minutes and can quickly scale to provide thousands of desktops to workers across the globe. CrowdStrike brings its industry-leading prevention and detection capabilities that include machine learning (ML), exploit prevention and behavioral detections to Amazon WorkSpaces, supporting remote workforces without affecting business continuity.

Bottlerocket – Bottlerocket is a new Linux-based open source operating system purpose-built by AWS for running containers on virtual machines or bare metal hosts, designed to improve the security and operations of organizations’ containerized infrastructure. CrowdStrike Falcon will provide runtime protection, unparalleled endpoint detection and response (EDR) visibility and container awareness, enabling customers to further secure their applications running on Bottlerocket.

BAE Systems delivers anti-money laundering regulatory compliance solutions created on AWS

BAE Systems announced a new offering created on Amazon Web Services (AWS) to deliver complete anti-money laundering regulatory compliance solutions.

The solution is supported by the availability, reliability and security of AWS and offers banks and financial institutions the opportunity to quickly stand up an affordable integrated financial crime regulatory compliance solution.

Through this implementation, BAE Systems will provide customers with advisory services, as well as implementation, migration, and management of regulatory and compliance solutions on AWS. By building on AWS, BAE Systems Applied Intelligence offers a flexible commercial model with no upfront costs – minimising an organisation’s capital expenditure and maximising ROI.

Customers can connect quickly using standard regulatory compliance data interfaces designed specifically for their industry and territory, with full service management significantly reducing the effort required of internal IT teams.

Once deployed, service levels cover hardware and software availability, security patches, support responsiveness, system upgrades, and ongoing support and maintenance.

Last month, BAE Systems Applied Intelligence announced NetReveal 360°, a complete regulatory compliance solution, packaged to operationalise quickly. Out of the box, customers receive a specifically designed service for the organisation, which includes end-to-end solutions for Customer Due Diligence (CDD), Anti Money Laundering (AML), and Watchlist Management (WLM).

Provisioning, management, and support of both the business solutions and the underlying AWS infrastructure are handled by BAE Systems Applied Intelligence, providing customers with a single point of contact and clear responsibility.

Financial institutions want to focus on delivering outstanding services and experiences to customers and growing their organisations, but at the same time they need to ensure they adhere to the latest regulations and avoid regulatory fines.

In smaller organisations, the challenge is balancing these two things – navigating changing regulations while making the best use of investigative teams in tackling financial crime.

For smaller or emerging financial institutions, NetReveal 360° offers an affordable and rapid go-live for the key elements of fighting financial crime. Larger banks looking to deploy standard setups quickly can enjoy the same benefits.

With more than 20 years’ experience in financial crime regulatory compliance, BAE Systems Applied Intelligence has a deep understanding of what organisations require to fully comply with applicable anti-money laundering regulations.

At the same time, the company understands the challenges financial organisations face in managing a regulatory compliance solution – specifically the need to stand up an integrated financial crime regulatory compliance solution quickly, efficiently and cost-effectively.

Garry Harrison, Managing Director of Financial Services at BAE Systems Applied Intelligence, said: “The importance of cloud has never been greater as we continue to outmaneuver the uncertainty caused by the global pandemic, while living with increasing levels of financial crime and cyber breaches.

“Cloud technology is vital to helping companies unlock greater efficiency, elasticity and innovation, and drive enduring business change at speed and scale. We are easing the burden on all financial institutions, both large and small, of becoming and remaining compliant with increasingly complex regulatory requirements.

“We chose to build on AWS because of their deep technical expertise and global scale. With a strengthened collaboration with AWS, we further enhance our position as a leader in financial services.”

Kryon unveils cloud-based Full Cycle Automation-as-a-Service platform powered by Amazon Web Services

Kryon launched the industry’s first cloud-based Full Cycle Automation-as-a-Service (FCAaaS) platform. Powered by Amazon Web Services (AWS), Kryon’s FCAaaS pushes the boundaries of automation by combining Process Discovery, RPA, and actionable analytics in one unified platform.

Users can be up and running with the cloud-based solution within 24 hours without extensive technical knowledge or an RPA background. Organizations can quickly and easily scale up automation bots on demand whenever capacity is needed, and can even have a robot in production within three weeks, thanks to the power of Kryon technology and the cloud.

Earlier this year, Kryon partnered with Virtual AI on its first iteration of cloud-based Full-Cycle Automation. With the launch of FCAaaS as a standalone platform, Kryon takes this concept to an entirely new level, from discovering and mapping processes through RPA deployment and productivity optimization to continual analysis for superior results.

Key benefits of Kryon FCAaaS include:

  • Frictionless, intuitive interface: Easy-to-understand, transparent user experience is designed to eliminate barriers to entry with no financial, recruitment, or training issues to slow down implementation.
  • Fastest implementation in the industry: Kryon is the only company on the market to commit to a robot in production within three weeks. Users can kickstart their automation journey with immediate set-up and have a functional bot in production in record time without the need to recruit and train a specialist RPA team.
  • Lowest cost of ownership in the industry: In addition to the lowest up-front investment in the RPA sector, the AWS-powered, cloud-based delivery eliminates the high costs of server infrastructure and its associated maintenance.
  • Unprecedented scalability and flexibility: Customers can quickly and easily scale up bots on demand whenever extra automation capacity is needed for a true high-availability solution.
  • Professional services: New users can leverage the highest-rated, hands-on training in the industry, as well as professional services that handle implementation, migration, testing, and automation development.

“Just as Kryon brought Full-Cycle Automation—the single best way to ensure a successful automation project—to the market, FCAaaS is designed to deliver on the promise of RPA where other solutions have failed,” said Harel Tayeb, CEO of Kryon.

“Using the power of the cloud to scale and accelerate automation initiatives, the cloud-based platform has an immediate impact, helping businesses become more agile, productive and profitable, and responsive to rapidly changing market conditions and the exponential growth of remote work operations.”

AWS io2: Provisioned IOPS SSD volumes for Amazon Elastic Block Store

Amazon Web Services (AWS) announced the general availability of io2, the next-generation Provisioned IOPS SSD volume for Amazon Elastic Block Store (Amazon EBS).

The new io2 volume is designed for 100x higher volume durability (99.999%) compared to the 99.9% durability offered by io1 Amazon EBS volumes – an expected annual failure rate of 0.001% versus 0.1%. Higher volume durability reduces the likelihood of storage failures and makes the primary copy of customers’ data more resilient, resulting in better application availability.

With io2, customers can drive 10x higher input/output operations per second (IOPS) from their provisioned storage at the same price as io1, so performance improves significantly without increasing storage cost.

io2 is ideal for performance-intensive, business-critical applications that need higher availability, such as ERP, CRM, and online transaction processing systems, and for the databases that back them, including SAP HANA, Oracle, Microsoft SQL Server, IBM DB2, Apache Cassandra, MySQL, and PostgreSQL.

Amazon EBS already offers four different volume types at various price points and performance benchmarks to support virtually any application that customers want to run in the cloud, including relational and non-relational databases, enterprise applications, and big data analytics.

Customers specifically choose io1 volumes (the previous highest performance EBS volumes) to run their critical, performance-intensive applications, like SAP HANA, Microsoft SQL Server, Splunk, Apache Cassandra, IBM DB2, MySQL, PostgreSQL, and Oracle databases.

For these applications, customers want greater volume durability to improve the resiliency of their primary data and get better application availability.

Additionally, to meet their IOPS requirements for these applications, customers often provision more storage than needed, resulting in higher spend than they would otherwise incur. For these applications, customers only want to provision and pay for the storage that they actually need.

Next-generation io2 volumes are designed to deliver 100x higher volume durability (from 99.9% to 99.999%), enhancing availability of business-critical applications.

io2 volumes are priced the same as io1 volumes, keeping the same predictable cost for EBS customers, but now support 10x higher IOPS-to-storage ratio and up to 500 IOPS for every provisioned GB, so that customers can get more performance without increasing their storage spend.

Similar to io1, io2 is designed to provide maximum performance of 64,000 IOPS, 1,000 MB/s throughput, and single-digit millisecond latencies. Customers can create new io2 volumes or easily upgrade their existing volumes to io2 using Elastic Volumes, which customers can use to modify the volume type without any downtime for applications running on their Amazon Elastic Compute Cloud (Amazon EC2) instances.
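
As a rough illustration of both points – the 500:1 IOPS-to-GB ratio and the no-downtime upgrade path – here is a hedged boto3 sketch; the Availability Zone and volume ID are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a new 100 GiB io2 volume. At up to 500 IOPS per provisioned
# GiB, 100 GiB supports up to 50,000 IOPS (volume maximum: 64,000).
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=100,                       # GiB
    VolumeType="io2",
    Iops=50000,
)
print("Created:", volume["VolumeId"])

# Upgrade an existing io1 volume in place via Elastic Volumes; the
# change applies without detaching the volume or stopping the instance.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    VolumeType="io2",
)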

“Customers rely on highly durable AWS block storage to keep their business-critical applications running at any scale,” said Mai-Lan Tomsen Bukovec, Vice President, Block and Object Storage, AWS.

“Today, we are excited to announce the new high-durability io2 volumes, which provide existing customers 100x higher volume durability than io1 at no additional cost. For customers for whom five nines of storage durability is critical to migrating on-premises business-critical applications to AWS, io2 brings together performance, durability, and agility in a single EBS volume.”

Customers can create new io2 volumes with just a few clicks using the AWS Management Console, AWS Command Line Interface, or AWS SDKs. io2 volumes are available today in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), and Middle East (Bahrain), with more regions coming soon.

Bristol Myers Squibb is a global biopharmaceutical company whose mission is to discover, develop, and deliver innovative medicines that help patients prevail over serious diseases. “We love the performance benefits of io1 and use it for simulations that require high storage performance,” said Mohammad Shaikh, Director, Scientific Computing Services, Cloud Computing & DevOps, at Bristol Myers Squibb.

“However, we need to provision more storage to meet the IOPS requirements, adding to our cost. But with the 10x increase in IOPS per GB ratio, we can easily enable peak performance at a much lower cost than we ever could with traditional SAN hardware vendors.”

Salesforce enables companies of every size and industry to take advantage of powerful technologies—cloud, mobile, social, internet of things, artificial intelligence, voice, and blockchain—to create a 360° view of their customers.

“Our commitment to our core value of customer success includes ensuring Salesforce applications are available whenever our customers need them,” said Paul Constantinides, Executive Vice President of Engineering, at Salesforce.

“Delivering an always-on experience requires highly reliable storage. With AWS and EBS new io2 volumes designed for five nines durability, we will continue to meet and exceed our customers’ expectations.”

Cloudreach, a Blackstone portfolio company, specializes in implementing and managing public cloud solutions for enterprise customers across Europe and North America.

“Cloudreach has been using AWS for over a decade, and we love the incredible innovation shown by Amazon EBS in terms of feature releases and the ability to act on customer requests. The impact of downtime for our customers is high, and io1 volumes have been great in minimizing these risks.

“We rely on AWS’ Multi-AZ architecture to keep our platforms running 24×7, but with increased volume durability we can further improve our application uptime,” said Chris Bunch, GM for the AWS Practice at Cloudreach.

“io2 is awesome for us, because the 100x higher volume durability further reduces the risk of downtime, and the new IOPS to GB ratio means we’re actually paying less for performance per GB.”

Rapid7 is a leading provider of cloud security analytics and automation. “Historically, we’ve used EBS gp2 volumes to deliver block storage at low cost for our Product Analytics platform, which analyzes customer behavior to help us make more intelligent product decisions and innovate faster.

“As our customer base has grown, we have started relying more and more on the availability of this platform to guide our strategic decisions. So, higher volume durability is critical for minimizing any downtime,” said Ulrich Dangel, Principal Infrastructure Architect, at Rapid7.

“We love that in addition to gp2 volumes for general-purpose storage on our EC2 instances, we now have the option of using io2 volumes for our highest-performance applications, which has helped us increase our data durability by 100x without redesigning the platform.”

Aviatrix cloud network platform serves as a Network Factory for new and existing AWS accounts

Aviatrix announced new capabilities to automate enterprise network infrastructure deployment and operations for organizations using the account factory feature of AWS Control Tower.

Available now in AWS Marketplace, the integrated Aviatrix software serves as the network factory for newly provisioned accounts on Amazon Web Services (AWS). The solution helps enterprise organizations meet cloud governance and regulatory initiatives for their multi-account, multi-region cloud networks.

“With AWS Control Tower, it only takes a few clicks for enterprise organizations to provision new AWS accounts that conform to company-wide policies,” said Chris Grusz, Director, AWS Marketplace, Amazon Web Services, Inc.

“The Aviatrix cloud networking solution uniquely offers a network factory for AWS Control Tower. With AWS Control Tower account factory ensuring account control governance, customers will benefit from Aviatrix’s new capabilities that make certain the network infrastructure supporting those accounts is secure and correctly deployed every time.”

AWS Control Tower provides the easiest way to set up and govern new, secure, multi-account AWS environments. Additionally, AWS Service Catalog enables organizations to create and manage catalogs of approved IT services for use on AWS.

The Aviatrix cloud network platform provides the prescriptive transit network architecture and operational visibility that meets enterprise cloud networking and security requirements.

The Aviatrix software is able to create Amazon Virtual Private Clouds (Amazon VPCs), automate route table updates and ensure network correctness by preventing overlapping CIDRs, while also enforcing IT security controls through policy-based network segmentation.
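
To make the network-correctness point concrete, here is a minimal, hypothetical illustration of the kind of CIDR overlap check such automation performs before provisioning a new VPC; it uses Python’s standard ipaddress module and is not Aviatrix’s actual implementation:

from ipaddress import ip_network

# CIDR blocks already allocated to existing VPCs (illustrative values).
existing_vpc_cidrs = ["10.0.0.0/16", "10.1.0.0/16", "172.16.0.0/20"]

def conflicts(candidate, existing):
    """Return the existing CIDR blocks that overlap the candidate."""
    new_net = ip_network(candidate)
    return [c for c in existing if ip_network(c).overlaps(new_net)]

# 10.0.128.0/18 falls inside 10.0.0.0/16, so provisioning is refused.
overlaps = conflicts("10.0.128.0/18", existing_vpc_cidrs)
if overlaps:
    raise ValueError(f"CIDR overlap with existing VPCs: {overlaps}")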

The integrated solution enables self-service delivery of new account infrastructure, provides detailed operational visibility, and helps ensure IT networking and security teams achieve compliance through a simple, automated workflow.

“In the journey to leverage cloud-based infrastructure we are always on the lookout for ways to streamline the automated delivery of services, in the simplest, most secure manner,” said Justin Donohoo, CTO at Observian.

“The Aviatrix cloud network platform delivers the network architecture, advanced network, security controls, and operational visibility enterprises need. I view the integration of Aviatrix with AWS Control Tower as a natural evolution to streamline the delivery of new AWS infrastructure.

“Leveraging Aviatrix as the network factory and the AWS Control Tower account factory ensures that both AWS accounts and enterprise network infrastructure are deployed correctly and in compliance with corporate policies and industry regulations.”

Provisioning cloud network services is often easier than operationalizing them. AWS Control Tower customers who need to be in full control of their cloud network environment can also use the Aviatrix software to access the network intelligence they need – all in one place – instead of logging into multiple screens, services and management consoles to stitch together the network visibility they require.

Aviatrix CoPilot – an available component of the integrated solution for AWS Control Tower – provides a global view of the cloud environment by leveraging intelligence and analytics from the Aviatrix cloud network platform combined with telemetry and network information gathered using native AWS APIs.

CoPilot’s rich, intuitive visualizations provide actionable insights to practitioners by using dynamic multi-cloud topology maps, FlowIQ intelligent traffic flow analytics, global traffic heat maps, time series trend charts and a complete cloud network monitoring dashboard.

“AWS Control Tower is a leap forward for enterprise IT teams who are tasked with maintaining control over their networks, without becoming a bottleneck to their organization’s cloud agility and speed,” said Nauman Mustafa, VP of Solution Engineering at Aviatrix.

“Regulatory compliance and corporate governance must be automated and adopted for the cloud era. The combination of AWS Control Tower and Aviatrix cloud network platform delivers the simplicity and automation along with enterprise visibility and control.”

Global public cloud services market grew 26% YOY in 2019 with revenues totaling $233.4 billion

The worldwide public cloud services market, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), grew 26% year over year in 2019 with revenues totaling $233.4 billion, according to IDC.

Spending continued to consolidate in 2019 with the combined revenue of the top 5 public cloud service providers (Amazon Web Services, Microsoft, Salesforce.com, Google, and Oracle) capturing more than one third of the worldwide total and growing 36% year over year.

“Cloud is expanding far beyond niche e-commerce and online ad-sponsored searches. It underpins all the digital activities that individuals and enterprises depend upon as we navigate and move beyond the pandemic,” said Rick Villars, group vice president, Worldwide Research at IDC.

“Enterprises talked about cloud journeys of up to ten years. Now they are looking to complete the shift in less than half that time.”

Public cloud services market has doubled since 2016

The public cloud services market has doubled in the three years since 2016. During this same period, combined spending on IaaS and PaaS has nearly tripled. This highlights the increasing reliance on cloud infrastructure and platforms for deploying enterprises’ internal IT applications as well as for SaaS and digital application delivery.
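
A quick back-of-the-envelope check (illustrative, in Python) shows these figures are mutually consistent: doubling over three years implies roughly 26% compound annual growth, matching the overall market’s 2019 growth rate, while nearly tripling implies about 44% for IaaS and PaaS:

# Doubling in 3 years: 2**(1/3) - 1; tripling: 3**(1/3) - 1.
doubling_cagr = 2 ** (1 / 3) - 1
tripling_cagr = 3 ** (1 / 3) - 1
print(f"Overall market CAGR: {doubling_cagr:.1%}")  # ~26.0%
print(f"IaaS+PaaS CAGR:      {tripling_cagr:.1%}")  # ~44.2%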

Spending on IaaS and PaaS is expected to continue growing at a higher rate than the overall cloud market over the next several years as resilience, flexibility, and agility guide IT platform decisions.

“Today’s economic uncertainty draws fresh attention to the core benefits of IaaS – low financial commitment, flexibility to support business agility, and operational resilience,” said Deepak Mohan, research director, Cloud Infrastructure Services.

“Cost optimization and business resilience have emerged as top drivers of IT investment decisions and IaaS offerings are designed to enable both. The COVID-19 disruption has accelerated cloud adoption with both traditional enterprise IT organizations and digital service providers increasing use of IaaS for their technology platforms.”

“Digitizing processes is being prioritized by enterprises in every industry segment and that is accelerating the demand for new applications as well as repurposing existing applications,” said Larry Carvalho, research director, Platform as a Service.

“Modern application platforms powered by containers and the serverless approach are providing the necessary tools for developers in meeting these needs. The growth in PaaS revenue reflects the need by enterprises for tools to accelerate and automate the development lifecycle.”

“SaaS applications remain the largest segment of public cloud spending, with revenues of more than $122 billion in 2019. Although growth has slowed somewhat in recent years, the current crisis serves as an accelerator for SaaS adoption across primary and functional markets to address the exponential growth of remote workers,” said Frank Della Rosa, research director, SaaS and Cloud Software.

The combined IaaS and PaaS market

A combined view of IaaS and PaaS spending is relevant because it represents how end customers consume these services when deploying applications on public cloud. In the combined IaaS and PaaS market, Amazon Web Services and Microsoft captured more than half of global revenues.

But there continues to be a healthy long tail, representing over a third of the market – typically companies with targeted, use-case-specific PaaS offerings. The long tail is even more pronounced in SaaS, where nearly three quarters of the spending is captured outside the top five.