Half of IT teams can’t fully utilize cloud security solutions due to understaffing

There are significant gaps between the rate at which cloud access security brokers (CASBs) are implemented or operated and their effective use within the enterprise, according to a global Cloud Security Alliance survey of more than 200 IT and security professionals from organizations of varying sizes and locations.

“CASB solutions have been underutilized on all the pillars but in particular on the compliance, data security, and threat protection capabilities within the service,” said Hillary Baron, lead author and research analyst, Cloud Security Alliance.

“It’s clear that training and knowledge of how to use the products need to be made a priority if CASBs are to become effective as a service or solution,” Baron concluded.

The survey found that while nearly 90% of the organizations surveyed are already using or researching the use of a CASB, 50% don’t have the staffing needed to fully utilize their cloud security solutions, a gap that could be remediated by working with top CASB vendors.

CASBs have yet to become practical for remediation or prevention

More than 30% of respondents reported having to use multiple CASBs to meet their security needs, and 34% find solution complexity an inhibitor to fully realizing the potential of CASB solutions.

Overall, CASBs perform well for visibility and detecting behavior anomalies in the cloud but have yet to become practical as a tool for remediation or prevention.

Additional findings

  • 83% have security in the cloud as a top project for improvement
  • 55% use their CASB to monitor user behaviors, while 53% use it to gain visibility into unauthorized access
  • 38% of enterprises use their CASB for regulatory compliance while just 22% use it for internal compliance
  • 55% of respondents use multi-factor authentication provided by their identity provider, as opposed to a standalone cloud-based product (20%)

Privacy and security concerns related to patient data in the cloud

The Cloud Security Alliance has released a report examining privacy and security of patient data in the cloud.

In the wake of COVID-19, health delivery organizations (HDOs) have quickly increased their utilization of telehealth capabilities (i.e., remote patient monitoring (RPM) and telemedicine) to treat patients in their homes. These technology solutions allow for the delivery of patient treatment, comply with COVID-19 mitigation best practices, and reduce the risk of exposure for healthcare providers.

Remote healthcare comes with security challenges

Going forward, telehealth solutions, which move large volumes of patient data over the internet and into the cloud, can be used to remotely monitor and treat patients who have mild cases of the virus, as well as other health issues. However, this remote environment also comes with an array of privacy and security challenges.

“For health care systems, telehealth has emerged as a critical technology for safe and efficient communications between healthcare providers and patients, and accordingly, it’s vital to review the end-to-end architecture of a telehealth delivery system,” said Dr. Jim Angle, co-chair of CSA’s Health Information Management Working Group.

“A full analysis can help determine whether privacy and security vulnerabilities exist, what security controls are required for proper cybersecurity of the telehealth ecosystem, and if patient privacy protections are adequate.”

The HDO must understand regulations and technologies

With the increased use of telehealth in the cloud, HDOs must adequately and proactively address data, privacy, and security issues. The HDO cannot leave this up to the cloud service provider, as it is a shared responsibility. The HDO must understand regulatory requirements, as well as the technologies that support the system.

Regulatory mandates may span multiple jurisdictions, and requirements may include both the GDPR and HIPAA. Armed with the right information, the HDO can implement and maintain a secure and robust telehealth program.

Why is SDP the most effective architecture for zero trust strategy adoption?

Software Defined Perimeter (SDP) is the most effective architecture for adopting a zero trust strategy, an approach that is being heralded as the breakthrough technology for preventing large-scale breaches, according to the Cloud Security Alliance.

“Most of the existing zero trust security measures are applied as authentication and sometimes authorization, based on policy after the termination of Transport Layer Security (TLS) certificates,” said Nya Alison Murray, senior ICT architect and co-lead author of the report.

“Network segmentation and the establishment of micro networks, which are so important for multi-cloud deployments, also benefit from adopting a software-defined perimeter zero trust architecture.”

SDP improves security posture

A zero trust implementation using SDP enables organizations to defend against new variations of old attack methods that constantly surface in existing perimeter-centric network and infrastructure models.

Implementing SDP improves the security posture of businesses facing the challenge of continuously adapting to attack surfaces that are expanding and, in turn, increasingly complex.

Network security implementation issues

The report notes particular issues that have arisen that require a rapid change in the way network security is implemented, including the:

  • Changing perimeter, whereby the past paradigm of a fixed network perimeter, with trusted internal network segments protected by network appliances such as load balancers and firewalls, has been superseded by virtualized networks, along with the ensuing realization that the network protocols of the past are not secure by design.
  • IP address challenge, noting that IP addresses lack any type of user knowledge with which to validate the trust of a device. Because an IP address carries no user context, it simply provides connectivity information and plays no part in validating the trust of the endpoint or the user.
  • Challenge of implementing integrated controls. The way networks and cybersecurity tools are implemented makes visibility and transparency of network connections problematic. Today, integration of controls is performed by gathering data in a SIEM for analysis.

How to implement least privilege in the cloud

According to a recent survey of 241 industry experts conducted by the Cloud Security Alliance (CSA), misconfiguration of cloud resources is a leading cause of data breaches.

The primary reason for this risk? Managing identities and their privileges in the cloud is extremely challenging because the scale is so large. It extends beyond just human user identities to devices, applications and services. Due to this complexity, many organizations get it wrong.

The problem becomes increasingly acute over time, as organizations expand their cloud footprint without establishing the capability to effectively assign and manage permissions. As a result, users and applications tend to accumulate permissions that far exceed technical and business requirements, creating a large permissions gap.

Consider the example of the U.S. Defense Department, which exposed access to military databases containing at least 1.8 billion internet posts scraped from social media, news sites, forums and other publicly available websites by CENTCOM and PACOM, two Pentagon unified combatant commands charged with US military operations across the Middle East, Asia, and the South Pacific. Three Amazon Web Services S3 cloud storage buckets were configured to allow any authenticated AWS user to browse and download the contents; AWS accounts of this type can be acquired with a free sign-up.
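To make that failure mode concrete, here is a minimal boto3 sketch that checks a bucket for the same kind of grant and then shuts it off. The bucket name is hypothetical, and the remediation shown (S3 Block Public Access) is one option among several:

```python
import boto3

# URI that identifies the "any authenticated AWS user" group in S3 ACLs.
AUTH_USERS_URI = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"

s3 = boto3.client("s3")
bucket = "example-exposed-bucket"  # hypothetical bucket name

# Look for ACL grants to the global AuthenticatedUsers group.
acl = s3.get_bucket_acl(Bucket=bucket)
exposed = [
    grant for grant in acl["Grants"]
    if grant["Grantee"].get("Type") == "Group"
    and grant["Grantee"].get("URI") == AUTH_USERS_URI
]

if exposed:
    perms = [g["Permission"] for g in exposed]
    print(f"{bucket} grants {perms} to ALL authenticated AWS users")
    # One remediation: enable S3 Block Public Access for the bucket,
    # which overrides public and cross-account ACL grants.
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
```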

Focus on permissions

To mitigate risks associated with the abuse of identities in the cloud, organizations are trying to enforce the principle of least privilege. Ideally, every user or application should be limited to the exact permissions required.

In theory, this process should be straightforward. The first step is to understand which permissions a given user or application has been assigned. Next, an inventory of those permissions actually being used should be conducted. Comparing the two reveals the permission gap, namely which permissions should be retained and which should be modified or removed.

This can be accomplished in several ways. The permissions deemed excessive can be removed or monitored and alerted on. By continually re-examining the environment and removing unused permissions, an organization can achieve least privilege in the cloud over time.
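As an illustration of that comparison, the toy sketch below computes the permissions gap as a simple set difference. The permission names are hypothetical; in practice, “granted” would be derived from the identity’s IAM policies and “used” from access logs such as CloudTrail:

```python
# Hypothetical inputs: permissions granted to one identity vs. those observed
# in use. Real data would come from IAM policies and access logs.
granted = {"s3:GetObject", "s3:PutObject", "dynamodb:Query", "rds:DescribeDBInstances"}
used = {"s3:GetObject", "dynamodb:Query"}

# The permissions gap: granted but unused, i.e., candidates for removal
# or for monitoring and alerting.
permissions_gap = granted - used
print(sorted(permissions_gap))  # ['rds:DescribeDBInstances', 's3:PutObject']
```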

However, the effort required to determine the precise permissions necessary for each application in a complex cloud environment can be both labor intensive and prohibitively expensive.

Understand native IAM controls

Let’s look at AWS, since it is the most popular cloud platform and offers one of the most granular Identity and Access Management (IAM) systems available. AWS IAM is a powerful tool that allows administrators to securely configure access to AWS cloud resources. With over 2,500 permissions (and counting), IAM gives users fine-grained control over which actions can be performed on a given resource in AWS.

Not surprisingly, this degree of control introduces an equal (some might say greater) level of complexity for developers and DevOps teams.

In AWS, roles are used as machine identities. Granting an application specific permissions requires attaching access policies to the relevant role. These can be managed policies, created by the cloud service provider (CSP), or inline policies, created by the AWS customer.
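A short boto3 sketch (the role name is hypothetical) shows how both kinds of policy can be enumerated for a role; pagination is omitted for brevity:

```python
import boto3

iam = boto3.client("iam")
role = "example-app-role"  # hypothetical role name

# Managed policies are standalone objects attached to the role by ARN.
managed = iam.list_attached_role_policies(RoleName=role)["AttachedPolicies"]
# Inline policies are embedded directly in the role itself.
inline = iam.list_role_policies(RoleName=role)["PolicyNames"]

for policy in managed:
    print("managed:", policy["PolicyName"], policy["PolicyArn"])
for name in inline:
    print("inline: ", name)
```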

Rein in roles

Roles, which can be assigned more than one access policy or serve more than one application, make the journey to least-privilege more challenging.

Here are several scenarios that illustrate this point.

1. Single application – single role: where an application uses a role with different managed and inline policies, granting privileges to access Amazon ElastiCache, RDS, DynamoDB, and S3 services. How do we know which permissions are actually being used? And once we do, how do we right-size the role? Do we replace managed policies with inline ones? Do we edit existing inline policies? Do we create new policies of our own?

2. Two applications – single role: where two different applications share the same role. Let’s assume that this role has access permissions to Amazon ElastiCache, RDS, DynamoDB and S3 services. But while the first application is using RDS and ElastiCache services, the second is using ElastiCache, DynamoDB, and S3. Therefore, to achieve least-privilege the correct action would be role splitting, and not simply role right-sizing. In this case, role-splitting would be followed by role right-sizing, as a second step.

3. Role chaining occurs when an application uses a role that does not have any sensitive permissions, but this role has the permission to assume a different, more privileged role. If the more privileged role has permission to access a variety of services like Amazon ElastiCache, RDS, DynamoDB, and S3, how do we know which services are actually being used by the original application? And how do we restrict the application’s permissions without disrupting other applications that might also be using the second, more privileged role?
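As a rough illustration of the role splitting described in scenario 2, the sketch below replaces one shared role with a right-sized role per application. All names and service scopes are hypothetical, and the wildcard resources would be narrowed in a real environment:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy allowing EC2 instances to assume the new roles (hypothetical).
TRUST_POLICY = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
})

# Per-application scopes, as observed in usage data (hypothetical).
app_scopes = {
    "app1-role": ["rds:*", "elasticache:*"],
    "app2-role": ["elasticache:*", "dynamodb:*", "s3:*"],
}

for role_name, actions in app_scopes.items():
    iam.create_role(RoleName=role_name, AssumeRolePolicyDocument=TRUST_POLICY)
    iam.put_role_policy(
        RoleName=role_name,
        PolicyName=f"{role_name}-scope",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            # "Resource": "*" keeps the sketch short; a real right-sizing
            # pass would scope this to specific resource ARNs.
            "Statement": [{"Effect": "Allow", "Action": actions, "Resource": "*"}],
        }),
    )
```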

One native AWS tool called Access Advisor allows administrators to investigate the list of services accessed by a given role and verify how it is being used. However, relying solely on Access Advisor does not connect the dots between access permissions and individual resources required to address many policy decisions. To do that, it’s necessary to dig deep into the CloudTrail logs, as well as the compute management infrastructure.
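For reference, here is a minimal sketch of that Access Advisor workflow in boto3 (the role ARN is hypothetical). The API is asynchronous: a job is started, polled, and then read for per-service last-access timestamps:

```python
import time
import boto3

iam = boto3.client("iam")
role_arn = "arn:aws:iam::123456789012:role/example-app-role"  # hypothetical

# Start the asynchronous Access Advisor job for this role.
job_id = iam.generate_service_last_accessed_details(Arn=role_arn)["JobId"]

# Poll until the job finishes.
while True:
    details = iam.get_service_last_accessed_details(JobId=job_id)
    if details["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

# Print when the role last touched each service (absent = never accessed).
for svc in details["ServicesLastAccessed"]:
    last = svc.get("LastAuthenticated", "never")
    print(f"{svc['ServiceNamespace']}: last accessed {last}")
```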

Least privilege in the cloud

Finally, keep in mind that we have only touched on native AWS IAM access controls. There are several additional issues to be considered when mapping access permissions to resources, including indirect access (via secrets stored in Key Management Systems and Secret Stores), or application-level access. That is a discussion for another day.

As we’ve seen, enforcing least privilege in the cloud to minimize the access risks that lead to data breaches or service interruption is unfeasible to do manually for many organizations. New technologies are emerging to bridge this governance gap by using software to automate the monitoring, assessment, and right-sizing of access permissions across all identities – users, devices, applications, etc. – to eliminate risk.

Panorays and CSA partner to deliver visibility into SaaS and cloud providers

Panorays is partnering with the Cloud Security Alliance (CSA) to become a licensed distributor of the CSA’s Consensus Assessments Initiative Questionnaire (CAIQ).

Panorays CTO Demi Ben-Ari explains, “Panorays’ unique 360-degree rating approach offers cloud provider consumers full visibility into the security posture of their providers.

Consumers can assess their providers using CAIQ while continuously monitoring the provider’s attack surface. The combination of the questionnaire approach together with an understanding of uncovered security gaps provides customers with actionable information regarding the risk that a provider poses.”

The Panorays partnership with the Cloud Security Alliance enables companies to quickly and easily ascertain whether their cloud provider complies with standard security regulations. Panorays’ customers further gain through a context-based CAIQ, customized to the relationship with the provider so that only questions about regulations or frameworks relevant to that relationship are asked. Onboarding third parties through CAIQ is done automatically, so customers can send, track and evaluate their cloud providers.

“We’re proud to have Panorays take an active role within our cloud community. With Panorays’ dedication to assessing other cloud and SaaS providers, we can continue to build a secure ecosystem,” said Jim Reavis, CEO, Cloud Security Alliance. “CSA consumers will benefit from Panorays’ innovative approach to accelerate cloud adoption.”

The CAIQ provides a set of questions a cloud provider can use to ascertain its compliance with the Cloud Controls Matrix (CCM). The CCM is a baseline set of security controls based on accepted security standards, regulations and control frameworks, such as ISO 27001/27002, ISACA COBIT, PCI, NIST, Jericho Forum and NERC CIP.

5 considerations for building a zero trust IT environment

Zero trust isn’t a product or service, and it’s certainly not just a buzzword. Rather, it’s a particular approach to cybersecurity. It means exactly what it says – not “verify, then trust” but “never trust and always verify.”

Essentially, zero trust is about protecting data by limiting access to it. An organization will not automatically trust anyone or anything, whether inside or outside the network perimeter. Instead, the zero trust approach requires verification for every person, device, account, etc. attempting to connect to the organization’s applications or systems before granting access.

But wait. Aren’t cybersecurity systems already designed to do that? Is zero trust simply cybersecurity with some added controls?

Good question. Zero trust frameworks certainly include many technologies that are already widely used by organizations to protect their data. However, zero trust represents a clear pivot in how to think about cybersecurity defense. Rather than defending only a single, enterprise-wide perimeter, this approach moves the perimeter to every network, system, user, and device within and outside the organization. This movement is enabled by strong identities, multi-factor authentication, trusted endpoints, network segmentation, access controls, and user attribution to compartmentalize and regulate access to sensitive data and systems.

In short, zero trust is a new way to think about cybersecurity to help organizations protect their data, their customers, and their own competitive advantage in today’s rapidly changing threat landscape.

Why now is the time for zero trust in cybersecurity

Corporate executives are feeling the pressure to protect enterprise systems and data. Investors and “data subjects” – customers and consumers – are also insisting on better data security. Security issues get even more complicated when some data and applications are on-premises and some are in the cloud, and everyone from employees to contractors and partners is accessing those applications using a variety of devices from multiple locations. At the same time, government and industry regulations are ramping up the requirements to secure important data, and zero trust can help demonstrate compliance with these regulations.

Zero trust cybersecurity technologies

Fortunately, the technology supporting zero trust is advancing rapidly, making the approach more practical to deploy today. There is no single approach for implementing a zero trust cybersecurity framework, and neither is there any single technology. Rather, technology pieces fit together to ensure that only securely authenticated users and devices have access to target applications and data.

For example, access is granted based on the principle of “least privilege”: providing users with only the data they need to do their job, when they are doing it. This includes implementing expiring privileges and one-time-use credentials that are revoked automatically once access is no longer required. In addition, traffic is inspected and logged on a continuous basis, and access is confined to perimeters to help prevent the unauthorized lateral movement of data across systems and networks.
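One concrete way to get expiring privileges is with short-lived credentials from a token service. The sketch below uses AWS STS as an example (the role ARN and session name are hypothetical); the returned credentials stop working automatically at their expiration time:

```python
import boto3

sts = boto3.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/just-in-time-access",  # hypothetical
    RoleSessionName="change-ticket-4711",  # ties the session to a request
    DurationSeconds=900,  # 15 minutes, the minimum STS allows
)
creds = resp["Credentials"]
print("credentials expire at:", creds["Expiration"])

# Use the temporary credentials for the task at hand. No revocation step is
# needed: the session token simply stops working at the expiration time.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```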

A zero trust framework uses a number of security technologies to increase the granularity of access to sensitive data and systems. Examples include identity and access management (IAM), role-based access control (RBAC), network access control (NAC), multi-factor authentication (MFA), encryption, policy enforcement engines, policy orchestration, logging, analytics and scoring, and file system permissions.

Equally important, technology standards and protocols are available to support the zero trust approach. The Cloud Security Alliance (CSA) has developed a security framework called a software-defined perimeter (SDP) that has been used in some zero trust implementations. The Internet Engineering Task Force (IETF) made its contribution to zero trust security models by sanctioning the Host Identity Protocol (HIP), which represents a new security networking layer within the OSI stack. Numerous vendors are building on these technical advancements to bring zero trust solutions to market.

Based on these technologies, standards and protocols, organizations can use three different approaches to implementing zero trust security:

1. Network micro-segmentation, with networks carved into small granular nodes all the way down to a single machine or application. Security protocols and service delivery models are designed for each unique segment.
2. SDP, based on a need-to-know strategy in which device posture and identity are verified before access to application infrastructure is granted.
3. Zero trust proxies that function as a relay between client and server, helping to prevent an attacker from invading a private network.

Which approach is best for a given situation depends on what application(s) are being secured, what infrastructure currently exists, whether the implementation is greenfield or encompassing legacy environments, and other factors.

Adopting zero trust in IT: Five steps for building a zero trust environment

Building a zero trust framework doesn’t necessarily mean a complete technology transformation. By using this step-by-step approach, organizations can proceed in a controlled, iterative fashion, helping to ensure the best results with a minimum of disruption to users and operations.

1. Define the protect surface – With zero trust, you don’t focus on your attack surface but on your protect surface: the critical data, applications, assets and services (DAAS) most valuable to your company. Examples of a protect surface include credit card information, protected health information (PHI), personally identifiable information (PII) and intellectual property (IP); applications (off-the-shelf or custom software); assets such as SCADA controls, point-of-sale terminals, medical equipment, manufacturing assets and IoT devices; as well as services like DNS, DHCP and Active Directory.

Once the protect surface is defined, you can move your controls as close as possible to it, enabling you to create a micro-perimeter (or compartmentalized micro-perimeters) with policy statements that are limited, precise and understandable.

2. Map transaction flows – The way traffic moves across a network determines how it should be protected. Thus, you need to gain contextual insight around the interdependencies of your DAAS. Documenting how specific resources interact allows you to properly enforce controls and provides valuable context to help ensure optimal cybersecurity with minimal disruption to users and business operations.

3. Architect your zero trust IT network – Zero trust networks are completely customized, not derived from a single, universal design. Instead, the architecture is constructed around the protect surface. Once you’ve defined the protect surface and mapped flows relative to the needs of your business, you can map out the zero trust architecture, starting with a next-generation firewall. The next-generation firewall acts as a segmentation gateway, creating a micro-perimeter around the protect surface. With a segmentation gateway, you can enforce additional layers of inspection and access control, all the way to Layer 7, for anything trying to access resources within the protect surface.

4. Create your zero trust security policies – Once the network is architected, you will need to create zero trust policies determining access. You need to know who your users are, what applications they need to access, why they need access, how they tend to connect to those applications, and what controls can be used to secure that access.

With this level of granular policy enforcement, you can be sure that only known allowed traffic or legitimate application communication is permitted.
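To illustrate what such a policy might look like, here is an entirely hypothetical sketch that encodes who/what/how rules as data and evaluates them with a default-deny rule. A real deployment would enforce this in a policy engine or segmentation gateway rather than in application code:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_group: str       # who is asking
    application: str      # what they want to reach
    device_trusted: bool  # how they connect: managed, healthy device?
    mfa_passed: bool      # strong authentication completed?

# Allowed (user group, application) pairs; everything else is denied.
ALLOWED = {
    ("finance", "erp"),
    ("engineering", "ci-cd"),
}

def evaluate(req: AccessRequest) -> bool:
    """Default deny: every condition must hold before access is granted."""
    return (
        (req.user_group, req.application) in ALLOWED
        and req.device_trusted
        and req.mfa_passed
    )

print(evaluate(AccessRequest("finance", "erp", True, True)))   # True
print(evaluate(AccessRequest("finance", "erp", True, False)))  # False (no MFA)
```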

5. Monitor and maintain networks – This final step includes reviewing all logs, internal and external, and focusing on the operational aspects of zero trust. Since zero trust is an iterative process, inspecting and logging all traffic will provide valuable insights into how to improve the network over time.

Additional considerations and best practices

For organizations considering undertaking a zero trust security model, here are some best practices to help ensure success:

  • Make sure you have the right strategy before choosing an architecture or technology. Zero trust is data-centric, so it is important to think about where that data is, who needs to have access to it, and what approach can be used to secure it. Forrester suggests dividing data into three categories (Public, Internal and Confidential), with “chunks” of data that have their own micro-perimeters.
  • Start small to gain experience. The scale and scope for implementing zero trust for an entire enterprise can be overwhelming. As an example, it took Google seven years to implement its own project known as BeyondCorp.
  • Consider the user experience. A zero trust framework doesn’t have to be disruptive to employees’ normal work processes, even though they (and their devices) are being scrutinized for access verification. Some of those processes can be in the background where users don’t see them at all.
  • Implement strong measures for user and device authentication. The very foundation of zero trust is that no one and no device can be trusted until it is thoroughly verified as having a right to access a resource. Thus, an enterprise-wide IAM system based on strong identities, rigorous authentication and non-persistent permissions is a key building block for a zero trust framework.
  • Incorporate a zero trust framework into digital transformation projects. When you redesign work processes, you can also transform your security model.

There’s never been a better time than now to adopt zero trust security models. The technologies have matured, the protocols and standards are set, and the need for a new approach to security cannot be ignored.

CSA SECtember: A new global event dedicated to the intersection of cloud and cybersecurity

The Cloud Security Alliance (CSA), the world’s leading organization dedicated to defining standards, certifications and best practices to help ensure a secure cloud computing environment, announced it is changing the way the cloud and cybersecurity industry meets with the launch of SECtember, a signature event focused on educating the industry on key issues and trends in cloud and cybersecurity.

Held in CSA’s home city of Seattle among the giants of cloud computing and the headquarters of several leaders within their respective industries, SECtember will feature in-depth training, networking opportunities and interactive sessions with global experts. The inaugural SECtember will be held Sept. 14-17, 2020, at the Sheraton Grand Seattle.

“In 2009, CSA began defining cloud security before most organizations were in the cloud. In 2020, cloud computing is now the primary mode of computing around the world and is also the foundation for cybersecurity writ large and the means by which we secure all forms of computing, such as the Internet of Things.

“Seattle is well-established around the world as the center of cloud computing, and with the introduction of SECtember, it can be the focal point of cybersecurity, as well.

“CSA is making a permanent commitment to bring this signature event to our home city on an annual basis, which is rapidly becoming a magnet for companies in the technology and cloud space,” said Jim Reavis, CEO and co-founder, Cloud Security Alliance.

“SECtember will bring together thought leaders from five continents to provide a global perspective on strategic cloud and cybersecurity issues and will provide state-of-the-art educational activities. We have a great deal of pride in Seattle, and while the topic of our conference is serious, we guarantee that the event will also be fun,” he added.

The annual event will offer attendees an enhanced roster of training, including courses covering the Certificate of Cloud Security Knowledge (CCSK) Foundation (one day), CCSK Plus (two days) along with CCSK Plus AWS and Azure, Cloud Governance & Compliance (one day), Advanced Cloud Security Practitioner (two days), and Certificate of Cloud Auditing Knowledge (two days), as well as other training sessions currently in development.

The event will also feature on-site executive briefings that leverage access to Seattle’s tech-dense business community and area CSA enterprise members. Targeted sessions are also being created to offer a chance for various industry groups to meet and learn from one another.

Speakers announced for CSA Summit at RSA Conference 2020

The Cloud Security Alliance (CSA) announced its headlining speakers for the 11th annual CSA Summit at RSA Conference 2020 (Feb. 24, San Francisco).

Phil Venables, Board Director and Senior Advisor (Risk and Cybersecurity) for Goldman Sachs, will be joining National Security Agency and Central Security Service General Counsel Glenn Gerstell, In-Q-Tel Chief Information Security Officer and industry legend Dan Geer, and Intuit Information Security’s Director of Adversary Management and Threat Intelligence Shannon Lietz as top speakers for the event.

“2019 has been a milestone year for cloud computing in every respect. Massive expansion in cloud adoption and breakthroughs in cloud security solutions have been tempered by record cloud data breaches and punitive fines for privacy regulation violations. The good news is that there is an extensive body of knowledge to successfully navigate the security and privacy challenges for the decade ahead. For the forthcoming CSA Summit 2020, we have doubled down on the number of sessions presented by enterprise end users and CISOs as they are truly the stewards of our industry. The speakers we have assembled are among the most admired leaders within cybersecurity, and we are very fortunate to have them all in one room on this special day. This event will set the tone for 2020 and provide a roadmap for where we intend to lead the industry in the years ahead,” said CSA Co-founder and CEO Jim Reavis.

Venables will share his expertise and insight gleaned from his years of leading Goldman Sachs’ Information Security, Technology Risk, Technology Governance and Business Continuity programs. As a senior advisor, he supports the firm’s executive leadership and client franchise on cybersecurity, technology risk, digital business risk, and operational resilience. Additionally, he spearheads the firm’s work with industry associations and initiatives to reduce systemic risk and serves as a member of the Firmwide Enterprise Risk Committee, Firmwide Technology Risk Committee, and Global Business Resilience Committee.

Attendees also will learn from thought leaders from multi-national enterprises, government, cloud providers and the information security industry, who will share best practices in cloud privacy and security. Among them will be some of the cloud industry’s most prominent enterprise leaders and experts:

Dan Geer, CISO of In-Q-Tel. Geer is the creator of the Index of Cyber Security and the Cyber Security Decision Maker, as well as a co-founder of SecurityMetrics.Org. His 1998 speech, “Risk Management Is Where the Money Is,” changed the focus of security, and he was the first to call for the eclipse of authentication by accountability in 2002. Geer is a widely noted author in scientific journals and a co-author of several books on risk management and information security, including “Cyberinsecurity: The Cost of Monopoly,” “Economics & Strategies of Data Security,” and “Cybersecurity & National Policy.”

Glenn Gerstell, General Counsel, National Security Agency (NSA) and Central Security Service. Gerstell was appointed in August 2015 as the General Counsel of the National Security Agency and Central Security Service. Prior to joining NSA, Gerstell practiced law for almost 40 years at Milbank, Tweed, Hadley & McCloy LLP, where he served as the managing partner of the firm’s Washington, D.C., Singapore, and Hong Kong offices. Earlier in his career, he was an Adjunct Law Professor at the Georgetown University School of Law and New York Law School. He has served on the President’s National Infrastructure Advisory Council, which reports to the President and the Secretary of Homeland Security on security threats to the nation’s infrastructure, as well as on the District of Columbia Homeland Security Commission.

Shannon Lietz, Director of Adversary Management and Threat Intelligence for Intuit Information Security. Lietz is an award-winning innovator with more than 20 years of experience pursuing advanced security defenses and next-generation security solutions. She is currently the DevSecOps Leader for Intuit, where she is responsible for setting and driving the company’s security engineering strategy and cloud security support for product innovation. She is passionate about leading the charge for security transformation and change management in large environments, leveraging Agile and Rugged principles.

Panels and presentations will focus on privacy and information security with an eye to artificial intelligence, quantum supremacy, blockchain, and fog computing.

Rich Mogull, CCSK Authorized Instructor and a prominent industry analyst and sought-after speaker, will be teaching the Certificate of Cloud Security Knowledge (CCSK) Plus training course on Feb. 23-24. The class will provide students a comprehensive review of cloud security fundamentals, prepare them to take the CCSK v4 certificate exam and guide them through six hands-on labs that tie cloud security best practices to real world applications.

Cloud Security Alliance launches credentials for auditing cloud computing systems

The Cloud Security Alliance (CSA), the world’s leading organization dedicated to defining and raising awareness of best practices to help ensure a secure cloud computing environment, announced the Certificate of Cloud Auditing Knowledge (CCAK), the only credential for industry professionals that demonstrates expertise in the essential principles of auditing cloud computing systems.

Set to be released in the second half of 2020, the CCAK aims to close the current industry knowledge gap for IT audit and security professionals trained and certified for traditional on-premises IT auditing and assurance.

Designed to provide CISOs, security and compliance managers, internal and external auditors, and practitioners of tomorrow with the proven skillset to address the specific concerns that arise from the use of various forms of cloud services, the CCAK will provide a common baseline of expertise and shared nomenclature to ensure that IT auditors and other related stakeholders are communicating appropriately and accurately regarding the effectiveness of cloud security controls.

With its focus on cloud computing, the CCAK differs from traditional IT audit certification programs, which have many excellent elements, but were not developed with an understanding of cloud computing and its many nuances.

An audited organization using cloud computing, for instance, will have a very different approach to satisfying control objectives. A cloud tenant will certainly not have the same administrative access as in a legacy IT system and will employ a wide range of security controls that will be foreign to an audit and assurance professional grounded in traditional IT audit practices.

“Cloud computing represents a radical departure from legacy IT in virtually every respect. The new technology architecture, the nature of how cloud is provisioned, and the new shared responsibility model means that IT audits must be significantly altered to provide assurance to stakeholders that their cloud adoption is secure,” said Jim Reavis, co-founder and CEO, Cloud Security Alliance.

“Because CSA already has developed the most widely adopted cloud security audit criteria and organizational certification, we are uniquely positioned to lead efforts to ensure industry professionals have the requisite skill set for auditing cloud environments.”

The CCAK’s holistic body of knowledge will be composed of the CSA’s Cloud Controls Matrix (CCM), the fundamental framework of cloud control objectives; its companion Consensus Assessments Initiative Questionnaire (CAIQ), the primary means for assessing a cloud provider’s adherence to CCM; and the Security, Trust, Assurance & Risk (STAR) program, the global leader in cloud security audits and self-assessments, in addition to new material.

For more than 10 years, CSA has led the development of the trusted cloud ecosystem, which notably includes the STAR program and the Certificate of Cloud Security Knowledge (CCSK), the gold standard for measuring professional competency in cloud security.

The CCAK and the CCSK will complement one another in that the CCSK provides the knowledge that enables an expert to secure cloud systems that will, in turn, be successfully scrutinized by an expert holding the CCAK. In many cases, an industry professional will be well served by obtaining both certificates.

Because the CCAK is intended to create a common cloud audit understanding, it’s expected to become a mandatory requirement for IT auditors and highly recommended for IT managers and professionals, especially those working in governance, risk management, compliance, and vendor/supply chain management.

Several opportunities exist for those looking to participate in the CCAK’s development. Individuals can volunteer to provide subject matter expertise or peer review, while organizations with a vested interest in cloud security can become a founding sponsor.