The majority of applications contain at least one security flaw and fixing those flaws typically takes months, a Veracode report reveals.
This year’s analysis of 130,000 applications found that it takes about six months for teams to close half the security flaws they find.
The report also uncovered some best practices that significantly improve these fix rates. It frames the relevant factors as “nature vs. nurture”: some factors teams have a lot of control over, and others they have very little control over.
Within the “nature” side, factors such as the size of the application and organization as well as security debt were considered, while the “nurture” side accounts for actions such as scanning frequency, cadence, and scanning via APIs.
Fixing security flaws: Nature or nurture?
The report revealed that addressing issues with modern DevSecOps practices results in higher flaw remediation rates. For example, using multiple application security scan types, working within smaller or more modern apps, and embedding security testing into the pipeline via an API all make a difference in reducing time to fix security defects, even in apps with a less than ideal “nature.”
“The goal of software security isn’t to write applications perfectly the first time, but to find and fix the flaws in a comprehensive and timely manner,” said Chris Eng, Chief Research Officer at Veracode.
“Even when faced with the most challenging environments, developers can take specific actions to improve the overall security of the application with the right training and tools.”
Other key findings
Flawed applications are the norm: 76% of applications have at least one security flaw, but only 24% have high-severity flaws. This is a good sign that most applications do not have critical issues that pose serious risks to the application. Frequent scanning can reduce the time it takes to close half of observed findings by more than three weeks.
Open source flaws on the rise: while 70% of applications inherit at least one security flaw from their open source libraries, 30% of applications have more flaws in their open source libraries than in the code written in-house.
The key lesson is that software security comes from getting the whole picture, which includes identifying and tracking the third-party code used in applications.
Multiple scan types prove efficacy of DevSecOps: teams using a combination of scan types including static analysis (SAST), dynamic analysis (DAST), and software composition analysis (SCA) improve fix rates. Those using SAST and DAST together fix half of flaws 24 days faster.
Automation matters: those who automate security testing in the SDLC address half of the flaws 17.5 days faster than those that scan in a less automated fashion.
Paying down security debt is critical: the link between frequently scanning applications and faster remediation times has been established in prior research.
This year’s report also found that reducing security debt – fixing the backlog of known flaws – lowers overall risk. Older applications with high flaw density experience much slower remediation times, adding an average of 63 days to close half of flaws.
After five months in beta, the GitHub Code Scanning security feature has been made generally available to all users: for free for public repositories, as a paid option for private ones.
“So much of the world’s development happens on GitHub that security is not just an opportunity for us, but our responsibility. To secure software at scale, we need to make a base-level impact that can drive the most change; and that starts with the code,” Grey Baker, GitHub’s Senior Director of Product Management, told Help Net Security.
“Everything we’ve built previously was about responding to security incidents (dependency scanning, secret scanning, Dependabot) — reacting in real time, quickly. Our future state is about fundamentally preventing vulnerabilities from ever happening, by moving security into the core of the developer workflow.”
GitHub Code Scanning
The Code Scanning feature is powered by CodeQL, a powerful static analysis engine built by Semmle, which was acquired by GitHub in September 2019.
“We want developers to be able to use their tools of choice, for any of their projects on GitHub, all within the native GitHub experience they love. We’ve partnered with more than a dozen open source and commercial security vendors to date and we’ll continue to integrate code scanning with other third-party vendors through GitHub Actions and Apps,” Baker noted.
“The major value add here is that developers can work, and stay within, the code development ecosystem in which they’re most accustomed to while using their preferred scanning tools,” explained James Brotsos, Senior Solutions Engineer at Checkmarx.
“GitHub is an immensely popular resource for developers, so having something that ensures the security of code without hindering agility is critical. Our ability to automate SAST and SCA scans directly within GitHub repos simplifies workflows and removes tedious steps for the development cycle that can traditionally stand in the way of achieving DevSecOps.”
Checkmarx’s SCA (software composition analysis) helps developers discover and remediate vulnerabilities within open source components included in the application, prioritizing them by severity. Checkmarx SAST (static application security testing) scans proprietary code bases – even uncompiled – to detect new and existing vulnerabilities.
“This is all done in an automated fashion, so as soon as a pull request takes place, a scan is triggered, and results are embedded directly into GitHub. Together, these integrations paint a holistic picture of the entire application’s security posture to ensure all potential gaps are accounted for,” Brotsos added.
Leon Juranic, CTO at DefenseCode, said that they are very excited by this initiative, as it provides access to security analysis for GitHub’s more than 50 million users.
“Having the security analysis results displayed as code scanning alerts in GitHub provides a convenient way to triage and prioritize fixes, a process that is usually cumbersome, requiring scrolling through many pages of exported reports, going back and forth between your code and the reported results, or reviewing them in dashboards provided by the security tool. The ease of use now means you can initiate scans, view, fix, and close alerts for potential vulnerabilities in your project’s code in an environment that is already familiar and where most of your other workflows are done,” he noted.
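The triage Juranic describes can also be done programmatically: GitHub exposes these alerts through its REST API (`GET /repos/{owner}/{repo}/code-scanning/alerts`). The Python sketch below separates the fetch call from pure triage logic; the owner, repo, and token values are placeholders, not real credentials.

```python
import json
import urllib.request
from collections import Counter

def summarize_alerts(alerts):
    """Count open alerts by rule severity so the riskiest findings surface first.

    Each alert dict mirrors the shape returned by GitHub's code scanning
    alerts endpoint ("state" plus a nested "rule" object)."""
    open_alerts = [a for a in alerts if a.get("state") == "open"]
    return dict(Counter(
        a.get("rule", {}).get("severity", "unknown") for a in open_alerts
    ))

def fetch_alerts(owner, repo, token):
    """Fetch alerts from the real endpoint; owner/repo/token are placeholders."""
    url = f"https://api.github.com/repos/{owner}/{repo}/code-scanning/alerts"
    req = urllib.request.Request(url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With `fetch_alerts("my-org", "my-repo", token)` wired in, `summarize_alerts` gives a quick severity breakdown for triage.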
A week ago, GitHub also announced additional support for container scanning and standards and configuration scanning for infrastructure as code, with integration by 42Crunch, Accurics, Bridgecrew, Snyk, Aqua Security, and Anchore.
The benefits and future plans
“We expect code scanning to prevent thousands of vulnerabilities from ever existing, by catching them at code review time. We envisage a world with fewer software vulnerabilities because security review is an automated part of the developer workflow,” Baker explained.
“During the code scanning beta, developers fixed 72% of the security errors found by CodeQL and reported in the code scanning pull request experience. Achieving such a high fix rate is the result of years of research, as well as an integration that makes it easy to understand each result.”
Over 12,000 repositories tried code scanning during the beta, and another 7,000 have enabled it since it became generally available, he says, and the reception has been really positive, with many highlighting valuable security finds.
“We’ll continue to iterate and focus on feedback from the community, including around access control and permissions, which are of high priority to our users,” he concluded.
20% of security professionals described their organizations’ DevSecOps practices as “mature”, while 62% said their practices are improving and 18% described them as “immature”, a WhiteSource report finds.
The survey gathered responses from over 560 developers and application security professionals in North America and Western Europe about the state of DevSecOps implementation in their organizations.
Reaching full DevSecOps maturity
- In order to meet short deployment cycles, 73% of security professionals and developers feel forced to compromise on security.
- AppSec tools are purchased to ‘check the box’, disregarding developers’ needs and processes, so tools end up purchased but not fully used by developers. The more mature an organization’s DevSecOps practices, the more AppSec tools it uses.
- There is a significant “AppSec knowledge and skills gap” challenge that is largely neglected by organizations. While 60% of security professionals say they have had an AppSec program in place for at least a year, only 37% of the developers surveyed reported being aware of an AppSec program running for longer than a year inside their organization.
- Security professionals’ top challenge is prioritization, but organizations lack the standardized processes to streamline vulnerability prioritization.
“Survey results show that while most security professionals and developers believe that their organizations are in the process of adopting DevSecOps, most organizations still have a way to go, especially when it comes to breaking down the silos separating development and security teams,” said Rami Sass, CEO, WhiteSource.
“Full DevSecOps maturity requires organizations to implement DevSecOps across the board. Processes, tools, and culture need to evolve in order to break down the traditional silos and ensure that all teams share ownership of both security and agility.”
DevSecOps tactics and tools are dramatically changing the way organizations bring their applications to fruition. Having a mindset that security must be incorporated into every stage of the software development lifecycle – and that everyone is responsible for security – can reduce the total cost of software development and ensure faster release of secure applications.
A common goal of any security strategy is to resolve issues quickly and safely before they can be exploited for a breach resulting in data loss. Application developers are not security specialists, and likely do not have the knowledge and skills to find and fix security issues in a timely manner. This is where security automation can help.
Security automation uses tools to continuously scan, detect, investigate, and remediate threats and vulnerabilities in code or the application environment, with or without human intervention. Tools scale the process of incorporating security into the DevSecOps process without requiring an increase in human skills or resources. They do this by automatically putting up safety rails around the issue whenever they find something that is a clear and obvious violation of security policy.
The AWS cloud platform is ripe for security automation
Amazon claims to have more than a million customers on its cloud computing platform, mostly small and mid-size companies but also enterprise-scale users. Regardless of customer size, Amazon has always had a model of shared responsibility for security.
Amazon commits to securing every component under its control. Customers, however, are responsible for securing what they control, which includes configurations, code, applications, and most importantly, data. This leaves a lot of opportunity for misconfigurations, insecure code, vulnerable APIs, and poorly secured data that can all lead to a data breach.
A common security problem in AWS is an open S3 storage bucket where data is publicly readable on the Internet. Despite the default configuration of S3 buckets being private, it’s fairly easy for developers to change policies to be open and for that permission change to apply in a nested fashion. A security automation tool should be able to find and identify this insecure configuration and simply disable public access to the resource without requiring human intervention.
Amazon added such tools in 2017 and again in 2018, yet we keep seeing headlines of companies whose data has been breached due to open S3 buckets. The security tool should communicate to the appropriate teams, but in many situations based on the sensitive contents of the data, the tool should also auto-remediate the misconfigured access policies. Teams that embrace security automation can also use this type of alerting and auto-remediation to become more aware of issues in their code or environment and, hopefully, head them off before they occur again.
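As a rough illustration of what such an auto-remediation might look like, here is a minimal Python sketch built on boto3’s real `get_bucket_acl` and `put_public_access_block` calls. The ACL check is a pure function over the real response shape; nothing touches a live account unless `remediate()` is called with valid AWS credentials.

```python
def is_publicly_readable(grants):
    """True if any ACL grant targets the AllUsers/AuthenticatedUsers groups.

    `grants` mirrors the "Grants" list returned by s3.get_bucket_acl()."""
    public_uris = {
        "http://acs.amazonaws.com/groups/global/AllUsers",
        "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
    }
    return any(g.get("Grantee", {}).get("URI") in public_uris for g in grants)

def remediate(bucket):
    """Apply S3 Block Public Access if the bucket ACL is public.

    Requires boto3 and AWS credentials; shown as a sketch, not run here."""
    import boto3
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket)
    if is_publicly_readable(acl["Grants"]):
        s3.put_public_access_block(
            Bucket=bucket,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True,
                "IgnorePublicAcls": True,
                "BlockPublicPolicy": True,
                "RestrictPublicBuckets": True,
            },
        )
```

A real tool would also notify the owning team, and skip the fix where the bucket is intentionally public (e.g., static website hosting).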
What else can be auto-remediated? There are hundreds of vulnerabilities in AWS that can and should be fixed without human intervention. Here are just a few examples:
- AWS CloudTrail data-at-rest encryption levels
- AWS CloudFront Distribution logging access control
- AWS Elastic Block Store access control
- AWS S3 bucket access control
- AWS S3 bucket ransomware exposure
- AWS Simple Queue Service exposure
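To make the first and last items on this list concrete, here is a minimal Python sketch of the detection side, written as pure checks over the response shapes AWS returns: the trail entries from CloudTrail’s `describe_trails()` call, and the JSON policy document attached to an SQS queue. Wiring these to live boto3 calls is left out.

```python
def trails_missing_kms(trails):
    """Return names of CloudTrail trails without KMS encryption at rest.

    `trails` mirrors the "trailList" from describe_trails(); a trail
    encrypted with a customer KMS key carries a "KmsKeyId" field."""
    return [t["Name"] for t in trails if not t.get("KmsKeyId")]

def queue_policy_is_open(policy):
    """True if any statement allows a wildcard principal unconditionally.

    `policy` mirrors an SQS queue policy document (IAM policy JSON)."""
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        wildcard = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*")
        if stmt.get("Effect") == "Allow" and wildcard and "Condition" not in stmt:
            return True
    return False
```

The matching auto-remediations would be `update_trail(..., KmsKeyId=...)` for CloudTrail and removing or scoping the offending policy statement for SQS.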
Essential features of a security automation tool
There are important categories of features of a security automation product for AWS. One category addresses data-in-motion with auto-remediation of API and queuing authentication and encryption. The other addresses data-at-rest with auto-remediation of database and storage encryption and backup. Security monitoring and enforcement are needed to automatically protect developers from making mistakes in how they are moving or storing data.
Here are four essential features to look for in a security automation tool.
1. Continuous discovery of shadow APIs within cloud, mobile, and web apps
APIs enable machine-to-machine data retrieval, essentially removing barriers and accelerating access to data. There is hardly a modern application today that doesn’t provide an API to integrate with other applications and data sources. A developer only needs to write a few lines of code to create an API. A shadow API is one that operates outside the purview of the security team. It’s a challenge to enforce security on code that is known only to the programmer. Thus, a security automation tool must have the ability to continuously scan for and discover APIs that may pose a security threat to prevent a data breach.
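A minimal way to approximate shadow-API discovery is to diff what is actually deployed against the security team’s approved inventory. The Python sketch below assumes the deployed list has the shape of API Gateway’s `get_rest_apis()` items; the inventory itself is a hypothetical register maintained by the security team.

```python
def find_shadow_apis(deployed, inventory):
    """Return names of deployed APIs absent from the approved inventory.

    `deployed` mirrors the "items" list of API Gateway's get_rest_apis();
    `inventory` is the security team's (hypothetical) approved-API list."""
    known = {name.lower() for name in inventory}
    return sorted(
        api["name"] for api in deployed if api["name"].lower() not in known
    )
```

In a real tool this diff would run continuously across API Gateway, load balancers, and ingress configs, since shadow APIs rarely live in just one place.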
2. Full-stack security analysis of mobile and modern web apps
Before data gets taken up into the AWS cloud, it often starts at the client layer with a web or mobile app. Protecting user privacy and securing sensitive data is a continuous effort that requires vulnerability analysis from mobile to web to backend cloud services. Modern attackers often focus on exploiting the client layer to hijack user sessions and to harvest embedded passwords and toxic tokens left inside mobile apps or single-page applications.
3. Automation fully integrated into the CI/CD pipeline with support for auto-remediation
Most vulnerability assessment tools integrate into the CI/CD pipeline by reporting what they find to systems such as Jira, Bugzilla and Jenkins. This is table stakes for assessment tools. What’s more valuable, however, is to include auto-remediation of the issues in the CI/CD pipeline. Instead of waiting for a human to make and verify the fix for the vulnerability, the tool does it automatically and reports the results to the ticketing system. This frees developers from having to spend time resolving common issues.
4. Automated vulnerability hacking toolkits for scheduled pre-production assessments
Companies often hire white hat hackers to do, basically, a moment-in-time penetration test in their pre-production environment. A more modern approach is to deploy a toolkit that continuously performs the same hacking activities. Not only is using such a toolkit much more cost effective, but it also works non-stop to find and fix vulnerabilities.
When auto-remediation may not be appropriate
Automatic remediation of some security issues isn’t always appropriate. Rather, it’s better that the tool simply discovers the issue and raises an alert to allow a person to decide how to resolve it. For example, auto-remediation is generally unsuitable when an encryption key is required, such as for a database, and for configurations that require user interactions, such as selecting a VPC or an IAM rule. It’s also not appropriate when the fix requires changes to existing code logic within the customer’s proprietary code base.
Nonetheless, some tools do aid in dealing with insecure code. One helpful feature that isn’t found in all security automation tools is the recognition of faulty code and recommendations on how to fix it with secure code. Seeing the recommended code fix in the pre-production stage helps resolve issues quickly without wasting time doing research on why the code is troublesome. Developers get to focus on their applications while security teams ensure continuous security validation.
AWS is a complex environment with many opportunities for misconfigurations and other issues that can lead to a data breach. Security automation with auto-remediation takes pressure off developers to find and fix a wide variety of vulnerabilities in code and configurations to help keep their organizations’ data safe.
Nearly half of organizations regularly and knowingly ship vulnerable code despite using AppSec tools, according to Veracode.
Among the top reasons cited for pushing vulnerable code were pressure to meet release deadlines (54%) and finding vulnerabilities too late in the software development lifecycle (45%).
Respondents said that the lack of developer knowledge to mitigate issues and lack of integration between AppSec tools were two of the top challenges they face with implementing DevSecOps. However, nearly nine in ten companies said they would invest further in AppSec this year.
The software development landscape is evolving
The research sheds light on how AppSec practices and tools are intersecting with emerging development methods and creating new priorities such as reducing open source risk and API testing.
“The software development landscape today is evolving at light speed. Microservices-driven architecture, containers, and cloud-native applications are shifting the dynamics of how developers build, test, and deploy code. Without better testing, integration, and regular developer training, organizations will put themselves at jeopardy for a significant breach,” said Chris Wysopal, CTO at Veracode.
- 60% of organizations report having production applications exploited by OWASP Top 10 vulnerabilities in the past 12 months. Similarly, seven in 10 applications have a security flaw in an open source library on initial scan.
- Developers’ lack of knowledge on how to mitigate issues is the biggest AppSec challenge – 53% of organizations only provide security training for developers once a year or less. Data shows that the top 1% of applications with the highest scan frequency carry about five times less security debt, or unresolved flaws, than the least frequently scanned applications, which means frequent scanning helps developers find and fix flaws to significantly lower their organization’s risk.
- 43% cited DevOps integration as the most important aspect to improving their AppSec program.
- 84% report challenges due to too many AppSec tools, making DevOps integration difficult. 43% of companies report that they have between 11 and 20 AppSec tools in use, while 22% said they use between 21 and 50.
According to ESG, the most effective AppSec programs report the following as some of the critical components of their program:
- Application security is highly integrated into the CI/CD toolchain
- Ongoing, customized AppSec training for developers
- Tracking continuous improvement metrics within individual development teams
- AppSec best practices are being shared by development managers
- Using analytics to track progress of AppSec programs and to provide data to management
Cloud breaches will likely increase in velocity and scale, according to Accurics, whose report also highlights steps that can be taken to mitigate them.
“While the adoption of cloud native infrastructure such as containers, serverless, and servicemesh is fueling innovation, misconfigurations are becoming commonplace and creating serious risk exposure for organizations,” said Om Moolchandani, CTO, Accurics. “As cloud infrastructure becomes increasingly programmable, we believe that the most effective defense is to codify security into development pipelines and enforce it throughout the lifecycle of the infrastructure. The receptiveness of the developer community toward assuming more security responsibility has been encouraging and a step in the right direction.”
Key report findings
Misconfigured cloud storage services are commonplace in a stunning 93% of the cloud deployments analyzed, and most also have at least one network exposure where a security group is left wide open. These issues will likely increase in both velocity and scale—and they’ve already contributed to more than 200 breaches over the past two years.
One emerging problem area is that despite the broad availability of tools like HashiCorp Vault and AWS Key Management Service (KMS), hardcoded private keys turned up in 72% of the deployments analyzed. Specifically, unprotected credentials stored in container configuration files were found in half of these deployments, which is an issue given that 84% of organizations use containers.
Going one level deeper, in 41% of the organizations the hardcoded keys carried high privileges and were used to provision compute resources; any breach involving these keys would expose all associated resources. Hardcoded keys have contributed to a number of cloud breaches.
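Detection of hardcoded keys is usually pattern-based. A minimal Python sketch: the `AKIA` prefix for AWS access key IDs and the PEM private-key header are documented formats, while the generic credential pattern is a broad heuristic that real scanners refine considerably.

```python
import re

# Patterns for common hardcoded secrets. The first two match documented
# formats; the last is a deliberately broad heuristic.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key
    re.compile(r"(?i)(?:password|secret|token)\s*[:=]\s*\S+"),  # generic cred
]

def scan_for_secrets(text):
    """Return matched snippets so findings can be triaged and rotated."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```

Run over container configuration files or IaC templates, this kind of check catches exactly the unprotected credentials the report describes, before they ship.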
Network exposures resulting from misconfigured routing rules posed the greatest risk to all organizations. In 100% of deployments, an altered routing rule exposed a private subnet containing sensitive resources, such as databases, to the Internet.
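Checking for this class of exposure can be reduced to inspecting route tables. A minimal Python sketch over the response shape of EC2’s `describe_route_tables()` call — a route sending 0.0.0.0/0 to an internet gateway on a supposedly private subnet is the red flag:

```python
def exposes_subnet(route_table):
    """True if any route sends all traffic (0.0.0.0/0) to an internet gateway.

    `route_table` mirrors one entry of the "RouteTables" list returned by
    EC2's describe_route_tables(); internet gateway IDs start with "igw-"."""
    return any(
        r.get("DestinationCidrBlock") == "0.0.0.0/0"
        and r.get("GatewayId", "").startswith("igw-")
        for r in route_table.get("Routes", [])
    )
```

A full tool would cross-reference the flagged route tables with subnet tags and the resources in those subnets before alerting or auto-reverting the rule.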
Automated detection of risks paired with a manual approach to resolution is creating alert fatigue, and only 6% of issues are being addressed. An emerging practice known as Remediation as Code, in which the code to resolve the issue is automatically generated, is enabling organizations to address 80% of risks.
Automated threat modeling is also needed to determine whether changes such as privilege increases and route changes introduce breach paths in a cloud deployment. As organizations embrace Infrastructure as Code (IaC) to define and manage cloud-native infrastructure, codifying security into development pipelines becomes possible and can significantly reduce the attack surface before cloud infrastructure is provisioned.
The new report makes the case for establishing the IaC as a baseline to maintain risk posture after cloud infrastructure is provisioned. Continuous assessment of new cloud resources and configuration changes against the baseline will surface new risks. If a change is legitimate, update the IaC to reflect the change; if it’s not, redeploy the cloud from the baseline.
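Drift detection against such a baseline boils down to a structural diff. The Python sketch below assumes both the baseline (e.g., derived from Terraform state) and the live snapshot are maps of resource IDs to attribute dicts; the shapes are illustrative, not Terraform’s actual state format.

```python
def detect_drift(baseline, live):
    """Compare live resource attributes against the IaC baseline.

    Both arguments map resource IDs to attribute dicts. Returns
    {resource_id: {attr: (baseline_value, live_value)}} for every
    attribute that has drifted."""
    drift = {}
    for rid, want in baseline.items():
        have = live.get(rid, {})
        changed = {
            k: (v, have.get(k)) for k, v in want.items() if have.get(k) != v
        }
        if changed:
            drift[rid] = changed
    return drift
```

Per the report’s recommendation, each drift entry then forks two ways: update the IaC if the change is legitimate, or redeploy from the baseline if it is not.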
The COVID-19 pandemic and its impact on the world has made a growing number of people realize how many of our everyday activities depend on software.
We increasingly work, educate ourselves, play, communicate with others, consume entertainment, go shopping and do many other things in the digital world, and we depend on software and online services/apps to make that possible. Software is now everywhere and embedded within just about everything we touch.
The pandemic has also significantly accelerated companies’ digital transformation efforts and the proliferation of new software, and has stressed two undeniable facts:
- Software security is more necessary than ever before
- Automated application testing solutions that support developer workflows are the only way to achieve software security at such an intense pace and scale
Problems to solve when aiming for software security
When we talk about software security, we talk about proactively making an effort to create software that is nearly impenetrable to cyberattacks. We talk about working with that goal in mind during each phase of the software development lifecycle (SDLC) and finding and fixing security vulnerabilities before they have a chance of becoming a problem.
At a surface level, it sounds like a no-brainer, but there are a number of challenges organizations face when it comes to putting the idea into practice in the form of a true DevSecOps program.
Many traditional software security approaches are also falling short, either due to a lack of SDLC and developer workflow integration, a failure to cover all stages of the SDLC holistically, a disregard of developer needs, or a lack of testing automation.
Embedding security into DevOps
Slowly but surely, DevOps has become the software delivery methodology of choice for many organizations.
By aligning all the people/departments involved in software development and delivery and empowering them to work in tandem, organizations that choose the DevOps culture and implement it well are able to deliver high quality software faster. And those that choose to embed security into DevOps (DevSecOps), make the whole proposition less risky for everybody involved, including the customer.
But how to do it so that everybody involved is enthusiastically on board and satisfied? The answer is: make security testing intrinsic to the software development and delivery processes by integrating it into existing pipelines, make it automated, and embed AppSec training and awareness on top of all developer operations to ensure continuous education.
With its Software Security Platform, which merges static application security testing (SAST), software composition analysis (SCA), interactive application security testing (IAST) and in-context developer awareness and training (aka “Codebashing”), Checkmarx has all those requirements covered.
In fact, the company’s platform has recently been named by Gartner as the “best fit” for DevOps, and the company as a 2020 Gartner Magic Quadrant Leader for Application Security Testing for the third year in a row.
To them, that’s no surprise, as they are constantly working to be on the bleeding edge of software security by constantly innovating their fleet of AST solutions.
Matt Rose, Checkmarx’s Global Director of Application Security Strategy, says that they’ve seen a lot of changes in the industry throughout the years, but that their product was really designed ahead of its time and fits “unbelievably well” with the modern DevOps processes.
None of this has gone unnoticed by private-equity firm Hellman & Friedman, which, in the midst of the COVID-19 pandemic, finalized a $1.15 billion acquisition of Checkmarx – the largest AppSec vendor acquisition to date.
The acquisition cements the company’s place in the industry as a player that is not going away, Rose noted, and the investment will allow them to continue the forward momentum and prepare for the future in terms of providing the best application security testing platform in the world.
Developer-focused security and automation
There are a few recent additions to Checkmarx’s Software Security Platform that solve industry challenges:
- How to identify vulnerable open source components in applications and quickly remediate vulnerabilities, and
- How to simplify the automation of application security testing to reduce the friction and latency between developer and security teams.
The former comes in the form of a new SaaS-based software composition analysis (SCA) solution (CxSCA) that can be used as part of the platform or independently of it. Featuring a unique “exploitable path” capability, CxSCA leverages Checkmarx’s leading source analysis technologies to identify vulnerable open source components that are in the execution path of the vulnerability, allowing AppSec teams and developers to focus their remediation efforts on the greatest risks. This dramatically reduces time spent from the point of vulnerability detection to triage and increases developers’ productivity.
The latter is solved by Checkmarx’s unique automation capabilities via an orchestration module (CxFlow) for the platform. With this, Checkmarx enables automated scanning earlier in the code management process by integrating directly into source code management systems (think GitHub, GitLab, BitBucket, Azure DevOps), as well as providing extensive integrations with leading CI/CD tools. With developer and AppSec teams being asked to build and deploy software – that is secure – faster than ever before, the ability to automate testing within developers’ work environment is critical.
“A common way of thinking is that CI orchestration is the best place to automate application security testing capabilities. However, multiple implementation barriers – ranging from lengthy set up times to inflexible CI processes – usually accompany this approach,” Rose noted.
“With Checkmarx, we can automate the testing of the software earlier by focusing on the source code management systems. In doing so, when a developer pushes code into the source code management system when they’re done, we listen when that push or pull request is made and then automate the scanning all the way through tickets being created. Developers really benefit from this as it simplifies AST automation within DevOps, without interrupting their workflow.”
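The event-driven flow Rose describes can be sketched generically: a webhook receiver inspects each delivery and queues a scan for pushes and relevant pull-request actions. The Python sketch below models GitHub’s real webhook event names and payload fields, but the scan-queuing itself is a placeholder, not Checkmarx’s implementation.

```python
import json

def should_scan(event_type, payload):
    """Decide whether a webhook delivery warrants a scan.

    Mirrors GitHub's webhook events: every "push" triggers a scan, and
    "pull_request" deliveries do so on "opened"/"synchronize" actions."""
    if event_type == "push":
        return True
    if event_type == "pull_request":
        return payload.get("action") in ("opened", "synchronize")
    return False

def handle_delivery(event_type, body):
    """Entry point a webhook endpoint would call per delivery."""
    payload = json.loads(body)
    if should_scan(event_type, payload):
        repo = payload.get("repository", {}).get("full_name", "unknown")
        return f"scan queued for {repo}"  # real orchestrator would enqueue here
    return "ignored"
```

The same shape works for GitLab, Bitbucket, or Azure DevOps hooks; only the event names and payload fields change.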
Looking ahead, Checkmarx continues to advance its offering to address the needed security for software and development trends like cloud native, microservices and containers. “DevOps is still evolving, a lot of the tooling is still evolving, and our capabilities will evolve with them,” Rose said.
Securing the application prior to release
There’s no doubt about it (and customers demand it): application security testing technologies must be automated to be effective in the modern software development arena, and Checkmarx is setting the standard. Their customers back this claim, with reviews on Gartner Peer Insights including:
- “The Checkmarx products are invaluable to our organization. They are a key element of our AppSec strategy and implementation.”
- “If your company’s developer workforce is not used to incorporating security standards into their builds, the Checkmarx stack of tools will do wonders for you in terms of integrating into your existing pipelines and providing the education via Codebashing that your developers will need.”
Other important requirements for effective AppSec testing tools include the ability to be fitted into developers’ toolchains, to cover all phases of SDLC (from coding through check-in and CI), to provide rapid feedback, and to be flexible, i.e., to allow for many different ways of implementing the technology based on the way an organization is developing software and to offer different deployment options.
Checkmarx offers all that to help organizations achieve the ultimate goal: flagging potential security vulnerabilities and risk early on, when remediation is considerably easier.
Despite the rhetoric around DevSecOps, security remains an afterthought when organizations are building software. Meanwhile, the latest Verizon threat report identified that web application attacks have doubled, validating that cloud-based data is under attack. The surge in web app security breaches in 2019 further solidifies that we are a long way from delivering on the DevSecOps vision.
With the rush to embrace digital services, organizations remain focused on the speed of release rather than on the quality of services. To accelerate the pace of digital transformation, security must be a fundamental part of software development.
To achieve DevSecOps, enterprises need to develop code faster and identify vulnerabilities sooner. Otherwise, you run the risk of DevOps simply creating software with vulnerabilities more quickly.
So, how can organizations make DevSecOps a reality? It’s all about embedding security within all aspects of your software development process rather than having it as a clunky bolt-on at the end. Here are four recommendations on how to make that happen.
1. Security needs to shift earlier in the SDLC
Security must be part of each stage of your software development lifecycle (SDLC). Product managers need to ensure that requirements, epics, and user stories have security acceptance criteria. Technical architects need to ensure that designs are reviewed from a security perspective (ideally by a security expert). Project managers and scrum masters need to ensure that security is factored into estimates. Your definition-of-done needs to include security criteria and you must review every part of your SDLC and ensure security is being considered.
To do this requires a mindset shift from an obsession with software delivery speed to a focus on quality. Too many teams measure success only in terms of velocity and time-to-market; quality and security measures are needed as well. Developers are still reluctant to think about security and must be reminded to consider the quality of the code they deliver. With the current focus on speed, too often software is released that is not stable, has vulnerabilities, and is not ready for widespread adoption. The recent failures with Zoom are a prime example of rushing to deliver innovative services with little thought for security vulnerabilities.
2. Break down the “security” monolith
Security is a broad area with different security issues such as authentication, access control, confidentiality, integrity, non-repudiation, and more. Different threat models range from simply making sure that an employee doesn’t accidentally do something they are not supposed to, to trying to protect data from state-sponsored cybercrime professionals. A single approach can’t possibly address all aspects of security or do it efficiently, yet that’s precisely what most teams try to do.
Instead, teams need to break down the security monolith into what it means to them and what matters to their users, e.g., keeping customers’ data confidential or ensuring customers only get access to the features they’ve paid for. They also need to define the threat model (i.e., the types of attacks) they need to defend against. If your product is deployed inside a corporate firewall, then denial-of-service attacks are unlikely to be an issue, but password theft attacks probably are.
3. Simple threat models have simple solutions
One of the security myths is that only a small number of highly specialized experts can “do security.” Now, this is true when it comes to designing architectures to protect data in distributed systems against highly skilled and determined attackers. But it’s absolutely not true if you’re just trying to ensure that your latest changes haven’t broken your existing authentication mechanism or that your authentication can’t be broken by random script kiddies.
Once you’ve broken down the “security” monolith, you can start addressing these different security issues in different ways. For example, a large number of threat models can be addressed using standard static and dynamic analysis tools. Sure, these tools won’t fix a flawed architecture or prevent attacks from highly skilled determined attackers, but they should deal with over 90 percent of threat models.
This is key to embedding security across your whole SDLC because it means that you don’t need an army of security specialists as most security issues can be dealt with by tools, developers, and testers. You only need your security experts for those big architectural reviews and periodic audits.
4. Security is everyone’s responsibility
Once you’ve broken down the “security” monolith, it’s much easier to get everyone involved in security, because you’re not asking everyone to become a security consultant – just to learn enough to deal with over 90 percent of threats.
Of course, this doesn’t happen magically; you still need to train the team to understand the threat models you’re trying to protect against, the overall security architecture, and what they need to do. Because it is broken down into specific threats and techniques, this training is far more useful than generic “security” training and will help ensure that your applications stay secure.
By applying these four recommendations, we believe that any organization can achieve a solid level of security and start making DevSecOps a reality. If organizations don’t rethink how they approach security as they develop software, then the number of web-based vulnerabilities will continue to grow.
This is the third in a series of articles that introduces and explains application programming interface (API) security threats, challenges, and solutions for participants in software development, operations, and protection.
Explosion of APIs
The API explosion is also driven by several business-oriented factors. First, enterprises are moving away from large monolithic applications that are updated annually at best. Instead, legacy and new applications are being broken into small, independently functional components, often rolled out as container-based …
Roles across software development teams have changed as more teams adopt DevOps, according to GitLab.
The survey of over 3,650 respondents from 21 countries worldwide found that rising rates of DevOps adoption and the implementation of new tools have led to sweeping changes in job functions, tool choices and organization charts within developer, security and operations teams.
“This year’s Global DevSecOps Survey shows that there are more successful DevOps practitioners than ever before and they report dramatically faster release times, truly continuous integration/deployment, and progress made toward shifting both test and security left,” said Sid Sijbrandij, CEO at GitLab.
“That said, there is still significant work to be done, particularly in the areas of testing and security. We look forward to seeing improvements in collaboration and testing across teams as they adjust to utilizing new technologies and job roles become more fluid.”
It’s a changing world for developer, operations and security teams and that holds true for roles and responsibilities as well as technology choices that improve DevOps practices and speed up release cycles. When done right, DevOps can go a long way to improve a business’s bottom line, but there are still obstacles to overcome to achieve true DevSecOps.
DevOps adoption and software development teams
Every company is now a software company and to drive business results, it is even more critical for teams to understand how the role of the developer is evolving – and how it impacts security, operations and test teams’ responsibilities.
The lines are blurring between developers and operations teams as 35% of developers say they define and/or create the infrastructure their app runs on and 14% actually monitor and respond to that infrastructure – a role traditionally held by operations.
Additionally, over 18% of developers instrument code for production monitoring, while 12% serve as an escalation point when there are incidents.
DevOps adoption rates are also up – 25% of companies are in the DevOps “sweet spot” of three to five years of practice while another 37% are well on their way, with between one and three years of experience under their belts.
As part of this implementation, many are also seeing the benefits of continuous deployment: nearly 60% deploy multiple times a day, once a day or once every few days (up from 45% last year).
As more teams become more accustomed to using DevOps in their work, roles across software development teams are starting to shift as responsibilities begin to overlap. 70% of operations professionals report that developers can provision their own environments, which is a sign of shifting responsibilities brought on by new processes and changing technologies.
Security teams unclear about responsibilities
There continues to be a clear disconnect between developers and security teams, with uncertainty about who should be responsible for security efforts. More than 25% of developers reported feeling solely responsible for security, compared to testers (23%) and operations professionals (21%).
For security teams, even more clarity is needed, with 33% of security team members saying they own security, while 29% (nearly as many) said they believe everyone should be responsible for security.
Security teams continue to report that developers are not finding enough bugs at the earliest stages of development and are slow to prioritize fixing them – a finding consistent with last year’s survey.
Over 42% said testing still happens too late in the life cycle, while 36% reported it was hard to understand, process, and fix any discovered vulnerabilities, and 31% found prioritizing vulnerability remediation an uphill battle.
“Although there is an industry-wide push to shift left, our research shows that greater clarity is needed on how teams’ daily responsibilities are changing, because it impacts the entire organization’s security proficiency,” said Johnathan Hunt, vice president of security at GitLab.
“Security teams need to implement concrete processes for the adoption of new tools and deployments in order to increase development efficiency and security capabilities.”
New technologies help with faster releases, create bottlenecks in other areas
For development teams, speed and faster software releases are key. Nearly 83% of developers report being able to release code more quickly after adopting DevOps.
Continuous integration and continuous delivery (CI/CD) has also been shown to help reduce the time to build and deploy applications – 38% said their DevOps implementations include CI/CD. An additional 29% said their DevOps implementations include test automation, 16% said DevSecOps, and nearly 9% use multi-cloud.
Despite this, testing has emerged as the top bottleneck for the second year in a row, according to 47% of respondents. Automated testing is on the rise, but only 12% claim to have full test automation. And, while 60% of companies report deploying multiple times a day, once a day or once every few days, over 42% said testing happens too late in the development lifecycle.
While strides toward implementing DevOps practices have been made, there is more work to be done when it comes to streamlining collaboration between security, developer and operations teams.
As breaches and hacks continue, and new vulnerabilities are uncovered, secure coding is being recognized as an increasingly important security concept — and not just for back-room techies anymore, Accurics reveals.
Cloud stack risk
“Our report clearly describes how current security practices are grossly inadequate for protecting transient cloud infrastructures, and why more than 30 billion records have been exposed through cloud breaches in just the past two years,” said Sachin Aggarwal, CEO at Accurics.
“As cloud stacks become increasingly complex, with new technologies regularly added to the mix, what’s needed is a holistic approach with consistent protection across the full cloud stack, as well as the ability to identify risks from configuration changes to deployed cloud infrastructure from a baseline established during development.
“The shift to infrastructure as code enables this; organizations now have an opportunity to redesign their cloud security strategy and move away from a point solution approach.”
Key takeaways from the research
- Misconfigurations of cloud native technologies across the full cloud native stack are a clear risk, increasing the attack surface, and being exploited by malicious actors.
- There is a significant shift towards provisioning and managing cloud infrastructure through code. This offers an opportunity for organizations to embed security earlier in the DevOps lifecycle. However, infrastructure as code is not being adequately secured, thanks in part to the lack of tools that can provide holistic protection.
- Even in scenarios where infrastructure as code actually is being governed, there are continuing problems from privileged users making changes directly to the cloud once infrastructure is provisioned. This creates posture drift from the secure baseline established through code.
Infrastructure as code
The research shows that securing cloud infrastructure in production isn’t enough. Researchers determined that only 4% of issues reported in production are actually being addressed. This is unsurprising since issue investigation and resolution at this late stage in the development lifecycle is challenging and costly.
A positive trend identified by the research is that there is a significant shift towards provisioning and managing cloud infrastructure through code to achieve agility and reliability.
Popular technologies include Terraform, Kubernetes, Docker, and OpenFaaS. Accurics’ research shows that 24% of configuration changes are made via code, which is encouraging given the fact that many of these technologies are relatively new.
Infrastructure as code provides organizations with an opportunity to embed security earlier in the development lifecycle. However, research revealed that organizations are not ensuring basic security and compliance hygiene across code.
The dangers are undeniable: high severity risks such as open security groups, overly permissive IAM roles, and exposed cloud storage services constituted 67% of the issues. This is particularly worrisome since these types of risks have been at the core of numerous high-profile cloud breaches.
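As a sketch of the kind of guardrail that catches these issues before provisioning, the check below scans Terraform-style resources for security groups open to the world. The resources are assumed to be already parsed into Python dicts, and the structure and names are illustrative rather than any particular tool's actual format:

```python
# Minimal sketch of an infrastructure-as-code policy check: flag security
# group rules open to the world (0.0.0.0/0) before the infrastructure is
# provisioned. The resource structure mimics parsed Terraform JSON and is
# illustrative only.

def find_open_ingress(resources):
    """Return names of security groups with world-open ingress rules."""
    flagged = []
    for res in resources:
        if res.get("type") != "aws_security_group":
            continue
        for rule in res.get("ingress", []):
            if "0.0.0.0/0" in rule.get("cidr_blocks", []):
                flagged.append(res["name"])
                break
    return flagged

resources = [
    {
        "type": "aws_security_group",
        "name": "web",
        "ingress": [{"from_port": 443, "cidr_blocks": ["0.0.0.0/0"]}],
    },
    {
        "type": "aws_security_group",
        "name": "db",
        "ingress": [{"from_port": 5432, "cidr_blocks": ["10.0.0.0/16"]}],
    },
]

print(find_open_ingress(resources))  # ['web']
```

Running a check like this in CI, against code rather than deployed infrastructure, is precisely the shift-left opportunity the report describes.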
The study also shows that even if organizations implement policy guardrails and security assessments across infrastructure as code, 90% of organizations allow privileged users to make configuration changes directly to cloud infrastructure after it is deployed. This unfortunately results in cloud posture drifting from the secure baseline established during development.
Recommended best practices
- The importance of protecting the full cloud native stack, including serverless, containers, platform, and infrastructure
- Embedding security earlier in the development lifecycle in order to reduce the attack surface before cloud infrastructure is provisioned, as well as monitor for incremental risks throughout its lifecycle
- Most importantly, preventing cloud posture drift from the secure baseline established during development once infrastructure is provisioned
The software development process has vastly changed in this past decade. Thanks to the relentless efforts of the cloud and virtualization technology providers, we now have nearly limitless compute and storage resources at our fingertips. One may think of this as the first wave of automation within the application development and deployment process.
With the rise in automation, machines must authenticate against each other. Authorization is nearly implicit in this handshake. Secrets are increasingly used by applications and (micro)services as a bootstrapping mechanism for initiation and continuity in operations. However, these secrets, which are largely credentials, need safekeeping and secure access in order to ultimately protect the end user. If left unmanaged, secrets sprawl over time, leading to leaks and ever-wider exposure.
In the past, programmers, testers, and release managers found radically new ways to build and deliver applications from development sandboxes to production environments. This emphasized a more rapid software delivery for teams and the classic waterfall model was no longer as desirable for the consumers of the technology. Agile quickly became the buzzword and nearly every software team strived to become leaner in their size and methodology.
A critical requirement in the delivery lifecycle was the concept of a sprint, which divvied up each project into many bursts of short and fast cycles of articulation, programming, testing, and deployment. This drastically increased the quantity of code produced by each team, and thereby put a greater emphasis on code quality and release processes. Testing and deployment thus began their rapid ascent into automation, which has since resulted in a gargantuan number of secrets being created and referenced within code. These secrets can be static or dynamic with respect to their use and longevity.
With the advent of container technology, the application team, referred to as DevOps, found newly empowered ways to build, test and release. The underlying need for hard resources faded away completely and each team now produced several copies of their software for all manner of consumption.
Containers gave new meaning to software lifecycle as many application components became fragmented with shortened lifespans. Containers would be summoned and discarded with such simplicity that application teams now had to think of their code merely as a (micro) service within a larger ecosystem. These applications would go from being stateful to stateless as services became context-aware only in the presence of secrets.
Containerization is gathering momentum, with Gartner reporting 60 percent of companies adopting it, up from 20 percent just 3 years ago. One can argue about whether Docker or Kubernetes is the more influential offering in this trend, but cloud providers are equally responsible for its adoption.
Regardless, the need for actively managing secrets is now front and center for every application team. The question is whether your application secrets are really a secret or simply a hard-to-reach set of variables. What is needed is a simple prescriptive plan for ensuring better application security for your team. It is no longer the job of DevOps but the collective responsibility of DevSecOps.
Building blocks of application security
Application and/or information security teams need to focus on proactive prevention rather than relying on reactive detection as the main tool in the arsenal. Getting ahead of adversarial code isn’t trivial, but in practice it starts with a few simple steps. Secrets are the sentries to applications, and fortifying them requires a proactive approach, including:
1. Application inventory – Every information security leader should take it upon themselves to demand an audit of all applications within the enterprise. Armed with such a list, it is their responsibility to now identify the domains which are critical for business and/or sensitive to the customer. This list is by no means static and should be evaluated periodically to ensure maturing security models and threats. The list may comprise applications (and/or micro services) designed in-house or those leveraged externally from service providers.
Regardless, a matrix of all such applications and services needs to be audited for dependencies on code repositories, data storage, and cloud-augmented resources. Common externalities can be found at GitHub, GitLab, Amazon AWS, Google Cloud, Microsoft Azure, Digital Ocean, OpenStack, Docker Hub, etc. This is not a comprehensive list, so organizations should cautiously audit each application and service for its dependencies in-house as well as externally.
Upon discovery of the repositories housing the business critical or customer sensitive information, it is time to forge a plan for the security of content residing in each. This acts as a manifesto for the enterprise, to which application teams must adhere. Established practices such as peer reviews and automation tools can ensure violations are mitigated in a timely manner, if not completely avoided. Teams can appoint a Data Officer or Curator who is responsible for maintaining the standards and ensure compliance.
2. Code and resource repository standards – At a bare minimum, applications must encrypt data at-rest transparently and transmit it securely over the network or across processes. However, there are times when even computation of the data within a process needs to occur securely. These are usually privileged processes that act upon highly sensitive data and must either do so using homomorphic encryption or a secure enclave, after weighing the practicality of either approach.
The next best option is to tokenize all sensitive data so the encryption preserves the original format as per NIST publication 800-38G. Applications and services can continue to work with the tokenized content unless a privileged user or entity must ascertain the original content through an authorized request.
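NIST 800-38G’s format-preserving modes (FF1/FF3) require a full cryptographic implementation, but the simpler vault-style variant of tokenization – a random token stands in for the sensitive value, and only a protected store can map it back – can be sketched in a few lines. The in-memory dictionaries below stand in for a hardened, access-controlled token vault:

```python
import secrets

class TokenVault:
    """Toy vault-style tokenizer: random tokens replace sensitive values,
    and only the vault can map a token back to the original."""

    def __init__(self):
        self._forward = {}   # value -> token
        self._reverse = {}   # token -> value

    def tokenize(self, value: str) -> str:
        if value in self._forward:          # stable token per value
            return self._forward[value]
        token = secrets.token_hex(8)
        self._forward[value] = token
        self._reverse[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # In a real deployment this path would be gated by authorization.
        return self._reverse[token]

vault = TokenVault()
t = vault.tokenize("4111-1111-1111-1111")
assert vault.detokenize(t) == "4111-1111-1111-1111"
assert t != "4111-1111-1111-1111"
```

Note that a hex token does not preserve the original format; that is exactly the gap the format-preserving schemes in 800-38G close.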
Whether an application relies on encryption or tokenization, it needs to store, access, and assert the rights of users and other services. Hence, it all comes down to a core set of secrets that applications rely upon in order to function normally as per the rules set forth by its owners. When it comes to management of application secrets, several guidelines are available, ranging from the OWASP Top 10 to CSRF and ESCA.
Secrets were often used primarily to encrypt data at rest and in transit but are increasingly used for identity and access management. Secrets are littered across application delivery pipelines. They are found in the code or configurations directly as credentials themselves or as references to certificates or keys that are reused with suboptimal entropy to generate secrets.
Most often these secrets manifest themselves as environment variables that are passed to containers and/or virtual hosts. Securing the secrets – and, more importantly, providing the highest level of security for access to the secrets – becomes paramount to the application architecture.
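A first step toward reining in sprawled secrets is simply finding them. The scanner below is a minimal illustration of the approach; the patterns are deliberately simple, and production tools ship far richer rule sets:

```python
import re

# Illustrative patterns only; real scanners use far more exhaustive rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "assigned_password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text):
    """Return (line_number, rule_name) pairs for suspected secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

sample = 'db_user = "app"\npassword = "hunter2"\nkey = "AKIAABCDEFGHIJKLMNOP"\n'
print(scan(sample))
```

Run against code, configuration, and CI definitions alike, a check of this shape gives the audit in step 1 a concrete starting inventory of leaked credentials.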
3. Centralize secrets with dynamic credentials – There is a multitude of services and products that claim to provide security for application secrets. As a CISO, it is incumbent upon you to ask what makes a product or service secure. The answer comes down to a phrase – root of trust – which is now being uprooted by the concept of zero trust.
Almost all products and services offering secrets management are based on the former root of trust model, where the master key needs to be secured, which is not a trivial undertaking given the hybrid or complex nature of deployments and dependencies. DevOps or DevSecOps is eager to vault or conjure all secrets and summon them freely across containers, hosts, virtualized services etc. What many do not realize is that the processes running these secret repositories are quite vulnerable and leak a plethora of ancillary secrets.
Enterprises can no longer assume that teams are sufficiently mindful when it comes to application architecture: there are too many options that merely check the security box so that teams can stay on schedule or within budget. By allowing this to continue, enterprises have made human gatekeepers the critical bearers of information security, thereby increasing their risk of exposure and leaks.
As NIST publication 800-207 comes to bear, many enterprises will realize the need for a true “Zero Trust” application architecture. This is available today for applications built on container orchestration platforms such as Google Kubernetes or OpenShift, as well as from leading cloud services rendered on Azure, Google and AWS. Authentication (AuthN) and authorization (AuthZ) have become intertwined and, with the advent of mutual authentication, form the foundation for building zero trust within the application.
Fundamentally, a client is always requesting resources from a service (or server). Zero trust in this transaction translates to validated provenance of both client and server, enabling claims on resources based on associated rights. Trustworthy JSON Web Tokens are increasingly becoming the standard in this paradigm of strong security with roots in cryptography. Servers will deny any resource claims carrying invalid or expired tokens, and clients likewise need not accept unverifiable responses. Having centralized secrets management with strong access controls and a robust API is critical to application security.
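Production systems would use an established JWT library for this, but the two checks at the heart of token validation – signature verification and expiry – can be sketched with the standard library for an HS256 token:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def b64url_decode(data: str) -> bytes:
    pad = "=" * (-len(data) % 4)
    return base64.urlsafe_b64decode(data + pad)

def sign_jwt(claims: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_jwt(token: str, key: bytes) -> dict:
    """Return claims only if the signature is valid and the token unexpired."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = (header_b64 + "." + payload_b64).encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig_b64.encode()):
        raise ValueError("invalid signature")
    claims = json.loads(b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims

key = b"demo-shared-secret"  # in practice, fetched from a secrets manager
token = sign_jwt({"sub": "svc-a", "exp": int(time.time()) + 60}, key)
assert verify_jwt(token, key)["sub"] == "svc-a"
```

The denial path is the important part: an invalid signature or an expired `exp` claim rejects the request before any resource claim is evaluated.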
Secrets management: Summary
The age of automation is just beginning and information security goes hand in hand with end user privacy and business continuity. We should be forewarned by the stream of attacks that often could be thwarted by simple practices that were established gradually over time at the core of the enterprise.
Application teams may find it easier to pilot a single service more securely in this manner rather than awaiting the information security leader or CISO to codify it within the enterprise. The need for a proven secrets management application or service is ever present. Pick a solution that is:
1. Flexible in its deployment model whether on-premises or natively in the cloud, or some combination (hybrid, multi-cloud etc.)
2. Secure in a way that goes beyond a simple key-value store that most secrets management providers ultimately provide
3. Capable of connecting to other applications and services through open standards such as OAuth, OpenID (SAML), LDAP, Trustworthy JWT and PKI
4. Proven to work for national agencies and regulatory bodies alike, since these entities have pivotal security considerations.
When Jordan Liggitt at Google posted details of a serious Kubernetes vulnerability in November 2018, it was a wake-up call for security teams ignoring the risks that came with adopting a cloud-native infrastructure without putting security at the heart of the whole endeavor.
For such a significant milestone in Kubernetes history, the vulnerability didn’t have a suitably alarming name comparable to the likes of Spectre, Heartbleed or the Linux Kernel’s recent SACK Panic; it was simply a CVE post on the Kubernetes GitHub repo. But CVE-2018-1002105 was a privilege escalation vulnerability that enabled a normal user to steal data from any container in a cluster. It even enabled an unauthorized user to create an unapproved service on Kubernetes, run the service in a default configuration, and inject malicious code into that service.
The first approach took advantage of pod exec/attach/portforward privileges to make a user a cluster-admin. The second method was possible as a bad actor could use the Kubernetes API server – essentially the front-end of Kubernetes through which all other components interact – to establish a connection to a back-end server and use the same connection. Crucially, this meant that the attacker could use the connection’s established TLS credentials to create their own service instances.
This was perfect privilege escalation in action, as any requests were made through an established and trusted connection and therefore didn’t appear in either the Kubernetes API server audit logs or server log. While they were theoretically visible in kubelet or aggregated API server logs, they wouldn’t appear any different to an authorized request, blending in seamlessly with the constant stream of requests.
Of course, open source versions of Kubernetes were patched quickly for this vulnerability and cloud service providers sprang into action to patch their managed services, but this was the first time that Kubernetes had experienced a critical vulnerability. It was also, as Jordan Liggitt stated in his CVE post at the time, notable for there being no way to detect who had used the vulnerability, or how often.
Unfortunately, this CVE also highlighted the unprepared state of many traditional enterprise IT organizations when it came to their applications that were housed in containers. Remediation required an immediate update to Kubernetes clusters, but Kubernetes isn’t backward-compatible with every previous release. This meant some organizations faced two issues: not only did they have to provision new Kubernetes clusters but they also found their applications didn’t work any more.
The rise of containers for apps, with their clever use of namespaces and cgroups which respectively limit what system resources you can see and use, has ushered in an era of hyper-scale and flexibility for enterprises.
According to Sumo Logic’s Continuous Intelligence Report, which is derived from 2,000 companies, the use of Docker containers in production has grown from 18 per cent in 2016 to almost 30 per cent in 2019 among enterprises. Docker owes much of its success to Kubernetes. The platform built from Google’s Borg project and open-sourced for all to use has orchestrated out much of the management complexity of handling thousands of containers. However, it has created security challenges.
Since this high-profile vulnerability, other Kubernetes flaws have been found, each exposing undiscovered gaps in how companies apply security to their container-based applications. There was the runc container exploit in February 2019, which allowed a malicious container to overwrite the runc binary and gain root on the container host. This was followed by an irritating – but limited by authorization – Denial of Service (DoS) attack that exploited patch requests.
The most recent vulnerability, uncovered by StackRox, was another DoS attack, this one hitting the Kubernetes API server. It exploited the parsing of YAML manifests by kube-apiserver: because kube-apiserver performs no input validation on manifests and applies no manifest file size limit, it was susceptible to the unfunny Billion Laughs DoS attack.
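The attack works because YAML aliases (`*x`) expand their anchors (`&x`) recursively. A shortened version of the classic payload shows the shape; each level multiplies the expanded size by roughly an order of magnitude, so a parser with no expansion limit exhausts memory after only a few levels:

```yaml
# Each level aliases the previous one nine times, so the expanded
# document grows by ~9x per level; a handful of levels is enough
# to exhaust memory in a parser with no expansion limit.
a: &a ["lol", "lol", "lol", "lol", "lol", "lol", "lol", "lol", "lol"]
b: &b [*a, *a, *a, *a, *a, *a, *a, *a, *a]
c: &c [*b, *b, *b, *b, *b, *b, *b, *b, *b]
d: &d [*c, *c, *c, *c, *c, *c, *c, *c, *c]
```

The defense is equally simple in principle: cap input size and alias expansion before parsing untrusted manifests.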
Container security requires continuous security
Among the lessons to be learned from the growing number of issues discovered over time in Kubernetes is that there will be more, and they will be discoverable across the different stages of the software development lifecycle (SDLC). In other words, Kubernetes is just like any other new, critical infrastructure component introduced in an application development environment.
Discovering and addressing this new class of vulnerabilities will require continuous security monitoring across development, test and production environments. It will also require collaboration and an integrated workflow between previously siloed teams, from initial planning and coding all the way through testing and into production. Many use the term DevSecOps to describe this evolution of the DevOps transitions that often accompany modern application development using containers, orchestration, and related technologies.
Choosing a common analytics platform for your DevSecOps projects can result in substantial operational savings while also providing the fabric to deal with the unique security challenges of containers. For example, integrated insight across the tool chain and technology stacks can be leveraged to pinpoint infected nodes, run compliance checks to pick up anonymous access to the API and apply run-time defenses for containers. In many cases, container security will automatically detect and stop unusual binaries that are being exploited, for instance, attempts to access the API from an application within a compromised container.
To build, run and secure containerized apps in this DevSecOps model requires a new approach to the core visibility, detection, and investigation workflows that make up the defense. DevSecOps requires tools that supply deep visibility into your systems and can identify, investigate and prioritize security and compliance threats across the SDLC. This level of observability comes from integrated, large-scale real-time analytics that is aggregated from both structured and unstructured data from across all the systems in the complex SDLC tool chain.
While straightforward as a strategy, often the execution of this approach is frustrated by fragmented analysis tools across logs, metrics, tracing, application performance, code analysis, integrated testing, runtime testing, CI/CD, etc. This often leaves teams managing several products to connect the dots between, for example, low-level Kubernetes issues and the potential impacts they will have to security at the application layer. Traditional analytics tools often lack the basic scale and ingestion capacity to integrate the data, but equally important they also lack the native understanding of these modern data sources required to generate insight in the data without excessive programming or human analysis.
Even after consolidating on a smaller set of application development and testing platforms with the required scale and insight, DevSecOps needs capabilities specifically designed for the container/orchestration problem space. First, from a discoverability standpoint, the platform must provide multiple views of the data for situational awareness. For example, visual representations of both the low-level infrastructure and the higher-level service view help connect the macro and micro security picture. From an observability standpoint, the system must also integrate with the wide array of tools that handle various aspects of collection and detection (such as Prometheus, Fluentd and Falco).
Metadata in Kubernetes, in the form of labels and annotations, is used for organizing and understanding the way containers are orchestrated, so leveraging this to gain security insight with automated detection and tagging is an important capability. Finally, the system needs to assimilate the insight and data from the various discrete container security systems to provide a comprehensive view.
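Automated detection and tagging from Kubernetes metadata can be as simple as checking each workload's labels against the set a security team relies on. The sketch below works on the metadata block as returned by the Kubernetes API; the required label names are illustrative assumptions, not a standard.

```python
# Sketch: flag workloads whose Kubernetes metadata lacks the labels used
# for automated security tagging. The required label names are assumptions.

REQUIRED_LABELS = {"app.kubernetes.io/name", "team", "data-classification"}

def missing_security_labels(pod_metadata: dict) -> set:
    """Return the required labels absent from a pod's metadata block."""
    labels = pod_metadata.get("labels") or {}
    return REQUIRED_LABELS - labels.keys()

# Example: simplified metadata as the Kubernetes API would return it.
pod = {
    "name": "payments-7c9d",
    "labels": {"app.kubernetes.io/name": "payments", "team": "platform"},
}
print(missing_security_labels(pod))  # {'data-classification'}
```

In practice this check would run over the live cluster inventory (e.g. via a Kubernetes client library) and feed the results into the same analytics platform as the rest of the security telemetry.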
All of these dimensions of integration (data, analytics, workflow) demand continuous security intelligence applied across the SDLC. Securing containers and orchestration, and more broadly the entire modern application stack, cannot suffer from the delays in both planning and production of connecting dozens of fragmented analytics tools.
At a higher level, securing the modern application stack also can’t depend on the delays of integrating data, analysis, and conclusions across the functional owners of these many tools (security, IT operations, application teams, DevOps, etc). Continuous intelligence from an integrated analytics platform can break these silos and can be a critical element of securing containerized applications in a DevSecOps model.
Want to know what’s in an open source software component before you use it? Microsoft Application Inspector will tell you what it does and spots potentially unwanted features – or backdoors.
About Microsoft Application Inspector
“At Microsoft, our software engineers use open source software to provide our customers high-quality software and services. Recognizing the inherent risks in trusting open source software, we created a source code analyzer called Microsoft Application Inspector to identify ‘interesting’ features and metadata, like the use of cryptography, connecting to a remote entity, and the platforms it runs on,” Guy Acosta and Michael Scovetta, security program managers at Customer Security and Trust, Microsoft, explained the Inspector’s genesis.
The Microsoft Application Inspector:
- Is a client .NET Core-based tool that can be run from a command line in Windows, Linux or macOS
- Uses static analysis and a customizable JSON-based rules engine to analyze the target code. Users can add/edit/remove default rule patterns (there are over 500) as well as add their own rules
- Is able to analyze code written in a variety of programming languages
Once the tool does its work, it generates an HTML report showing the detected features, project summary and metadata. JSON and TEXT output formats are supported for those who prefer them.
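The JSON output lends itself to post-processing, for example to summarize how often each feature was detected. The report structure below ("matchDetail", "ruleName", "confidence") is an assumption made for illustration; consult the tool's actual output schema before relying on specific field names.

```python
import json

# Sketch: summarize a JSON feature report. The field names used here are
# illustrative assumptions about the report schema, not a documented format.

sample_report = json.loads("""
{
  "matchDetail": [
    {"ruleName": "Cryptography: symmetric", "confidence": "High"},
    {"ruleName": "Network: outbound connection", "confidence": "Medium"},
    {"ruleName": "Cryptography: symmetric", "confidence": "High"}
  ]
}
""")

def feature_counts(report: dict) -> dict:
    """Count how often each (feature, confidence) pair appears."""
    counts = {}
    for match in report.get("matchDetail", []):
        key = (match["ruleName"], match["confidence"])
        counts[key] = counts.get(key, 0) + 1
    return counts

for (rule, conf), n in sorted(feature_counts(sample_report).items()):
    print(f"{rule} ({conf}): {n}")
```

Summaries like this make it easy to diff feature sets between two versions of a component – the "feature delta" use case the developers describe for catching injected backdoors.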
Each discovered feature can be broken down into more specific categories and receives a confidence indicator. Users can see for themselves the source code snippets that produced each “discovery”.
“Basically, we created Application Inspector to help us identify risky third party software components based on their specific features, but the tool is helpful in many non-security contexts as well,” the developers explained.
“For instance, it can also help identify feature deltas or changes between versions, which can be critical for detecting the injection of backdoors. Well-constructed and hidden backdoors can go undetected by a tool that only looks for poor security programming practices, because it doesn’t look at context at a feature level.”
Nevertheless, the tool is not meant to replace security static analyzers or security code reviews, but to be used alongside them.
“Knowing what is in your software is the first step to making key choices about what actions are appropriate before allowing it to be deployed in your own or customer environments. Our tool includes hundreds of default identifying patterns for detecting general features like frameworks used, file I/O and OS APIs, as well as the ability to detect key security and privacy features of a component,” the developers concluded.
Microsoft Application Inspector is open source and available for download from GitHub.
As organizations proceed to move their processes from the physical world into the digital, their risk profile changes, too – and this is not a time to take risks. By not including security in DevOps processes, organizations are exposing their business in new and surprising ways.
DevOps has accelerated software development dramatically, but it has also created a great deal of pain for traditional security teams raised on relatively slow testing. Moving from annual security testing to an almost daily security cadence has put a huge strain on legacy approaches to automated testing, which rely on a centralized team of experts to run static analysis and dynamic scanning tools.
To help combat this, DevOps has spawned the “shift left” movement, which focuses on including security in the software development lifecycle at an earlier stage than before. New technologies such as interactive application security testing (IAST) and runtime application self-protection (RASP) empower developers to do their own security. Automated software pipelines that provide an optimal testing infrastructure have allowed organizations to become much more effective at securing their apps – certainly far more effective and efficient than the old “tool soup” approach.
Done right, security can be more efficient in modern DevOps than it ever was in traditional waterfall processes. You can “shift left” to empower developers to commit secure code themselves. But don’t forget to also “extend right” to get accurate security telemetry and protection into production.
The goal of DevSecOps is to automate the process of verifying security before code goes into production, so that it runs continuously as part of your pipeline. The “Sec” is important: by closing the barn door before the horse has bolted, you improve your security posture and have better inbuilt protection against new and emerging risks.
The first principle of DevSecOps is to create a security workflow. This means breaking security work up into small pieces and carrying them to completion, rather than splitting security work across a series of gigantic phases and never connecting the dots.
Take SQL injection (SQLi) for example. Most traditional approaches would have a threat model identifying SQLi, a security architecture with defenses for SQLi, security requirements, secure coder training, security libraries to use, scanning tools, penetration testing, security code review and web application firewall rules. Yet, as they were all done independently, there was little cohesion. Security should be about traceability and that should be one of the biggest benefits of DevSecOps.
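At the coding level, the SQLi defense that all of those artifacts point to is a parameterized query, and in a DevSecOps workflow it can be verified by an automated test rather than living only in documents. The sketch below uses Python's built-in sqlite3 module purely as a minimal, self-contained illustration of the technique.

```python
import sqlite3

# Sketch: the coding-level SQLi defense is a parameterized query. User input
# is bound as a parameter, never spliced into the SQL string, so injection
# payloads are treated as plain data.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name: str):
    # The "?" placeholder lets the driver bind the value safely.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))        # [('alice',)]
print(find_user("' OR '1'='1"))  # [] - the injection payload is inert
```

A test asserting exactly this behavior can run on every commit, giving the traceability from threat model to defense that the article calls for.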
The second principle is to create tight security feedback loops – in other words, instant security feedback: the timelier the feedback, the better. Slow or inaccurate feedback skyrockets the cost and demolishes the success rate, so organizations need to use technologies that provide instant and accurate feedback. Anything else is virtually useless.
The third principle of DevSecOps is to create a culture of security innovation and learning – and for good reason. Security moves fast: to stay ahead, organizations need to be agile. Yet, most organizations today simply react to their auditors, adhering to standards written years ago about problems from years before that. We need to get to a place where organizations are thinking about the risks that might exist in ten years’ time and start planning their defenses now. Future-proofing them makes sound business sense.
Embedding security: Capitalize on DevSecOps
The idea of turning security requirements, security policy, security architecture, and security coding guidelines into software is very powerful. Imagine a simple security rule like “Applications must use the X-FRAME-OPTIONS header to prevent clickjacking”. You could put that in all the documents above and nobody would ever read it; mistakes would continue to happen. But if it is turned into an embedded test that checks every HTTP response within the application to ensure the header is set properly, any violation would be instantly reported to developers and could be fixed swiftly and correctly.
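Such an embedded test fits in a few lines. This is a minimal sketch using a stub response object; in a real pipeline the same assertion would run against a framework's test client for every HTTP response the application produces.

```python
# Sketch: the clickjacking rule turned into an automated check. The Response
# class is a stand-in for a framework test-client response.

class Response:
    def __init__(self, headers: dict):
        self.headers = headers

def assert_clickjacking_defense(response: Response) -> None:
    """Fail unless X-Frame-Options is set to a safe value."""
    value = response.headers.get("X-Frame-Options", "").upper()
    assert value in {"DENY", "SAMEORIGIN"}, \
        "X-Frame-Options header missing or misconfigured"

# A compliant response passes silently; a missing header fails the build.
assert_clickjacking_defense(Response({"X-Frame-Options": "DENY"}))
```

Wired into CI, the rule enforces itself on every commit instead of waiting to be read in a policy document.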
As the list of rules to be tested grows over time, it will ensure greater accuracy and reduce the need for intervention by already time-strapped human experts. This will, in turn, accelerate software development processes, ensuring an organization’s ability to compete. It will also provide assurance that the applications are trusted and secure.
In the past couple of years, DevSecOps has quickly gained mindshare among developers and security teams alike. But it is still very early days. Many vendors are trying to capitalize on DevSecOps by slapping some DevOps lipstick on their tool and saying that it’s the best for DevSecOps. Organizations that want to make progress in DevSecOps should ask themselves:
1. Do we have continuous inventory of our applications, APIs, components, and other code everywhere in our enterprise? (You can’t secure what you don’t know.)
2. For each application, what real evidence do we have that our applications have the right defenses and that they are effective?
3. For each application, how good is our visibility into who is attacking and what techniques they are using, and do we have runtime exploit prevention in place?
Only then can organizations fully embrace the DevSecOps revolution to keep them and their customers safe.
While nearly 75% of developers worry about the security of their applications and 85% rank security as very important in the coding and development process, nearly half of their teams lack a dedicated cybersecurity expert, according to WhiteHat Security.
Application security tools
While 57% of participants feel their teams have the right application security tools in place to incorporate security into the software development lifecycle (SDLC), 14% do not feel that they’ve been given the …