A critical vulnerability (CVE-2020-27955) in Git Large File Storage (Git LFS), an open source Git extension for versioning large files, allows attackers to achieve remote code execution if the Windows-using victim is tricked into cloning the attacker’s malicious repository using a vulnerable Git version control tool, security researcher Dawid Golunski has discovered.
It can be exploited via a variety of popular Git clients in their default configuration – GitHub CLI, GitHub Desktop, SmartGit, SourceTree, GitKraken, Visual Studio Code, etc. – and likely other clients and development IDEs (i.e., those that install Git with the Git LFS extension by default).
“Web applications / hosted repositories running on Windows which allow users to import their repositories from a URL may also be exposed to this vulnerability,” Golunski added.
About the vulnerability (CVE-2020-27955)
Golunski found that Git LFS does not specify a full path to the git binary when executing a new git process via Go’s exec.Command() function.
“As the exec.Command() implementation on Windows systems includes the current directory, attackers may be able to plant a backdoor in a malicious repository by simply adding an executable file named: git.bat, git.exe, git.cmd or any other extension that is used on the victim’s system (PATHEXT environment dependent), in the main repo’s directory. As a result, the malicious git binary planted in this way will get executed instead of the original git binary located in a trusted path,” he explained.
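The general class of fix for this kind of bug – resolving a helper binary to a path from trusted, absolute PATH directories before executing it, rather than letting the OS search the current directory – can be sketched as follows. This is an illustrative Python sketch of the defensive pattern, not Git LFS’s actual code (which is written in Go):

```python
import os
import shutil

def resolve_trusted(binary: str) -> str:
    """Resolve an executable from absolute PATH directories only, never
    from the current working directory -- the lookup behavior that made
    CVE-2020-27955 exploitable on Windows."""
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        # Skip empty or relative entries, which implicitly point at the cwd
        if not directory or not os.path.isabs(directory):
            continue
        candidate = shutil.which(binary, path=directory)
        if candidate:
            return candidate  # full, trusted path
    raise FileNotFoundError(f"{binary!r} not found on trusted PATH")

# Then execute via the resolved path instead of a bare name, e.g.:
# subprocess.run([resolve_trusted("git"), "version"])
```

On Windows, `shutil.which` also honors the PATHEXT variable mentioned in the advisory, which is why a `git.bat` planted in the repository directory would win the lookup if the current directory were searched first.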
The vulnerability can be triggered if the victim is tricked into cloning the attacker’s malicious repository using a vulnerable Git version control tool.
Golunski says that CVE-2020-27955 is trivial to exploit, and has released PoC exploit code, as well as video demonstrations of the exploit in action on various Git clients.
What to do?
The vulnerability affects Git LFS versions 2.12 or earlier on Windows systems (but not on Unix). According to the Git LFS maintainers, there is no workaround for this issue other than avoiding untrusted repositories.
Affected users and product vendors are advised to update to the latest Git LFS version (v2.12.1, released on Wednesday), which plugged the security hole. Git for Windows has also been updated to include this Git LFS version.
After five months in beta, the GitHub Code Scanning security feature has been made generally available to all users: for free for public repositories, as a paid option for private ones.
“So much of the world’s development happens on GitHub that security is not just an opportunity for us, but our responsibility. To secure software at scale, we need to make a base-level impact that can drive the most change; and that starts with the code,” Grey Baker, GitHub’s Senior Director of Product Management, told Help Net Security.
“Everything we’ve built previously was about responding to security incidents (dependency scanning, secret scanning, Dependabot) — reacting in real time, quickly. Our future state is about fundamentally preventing vulnerabilities from ever happening, by moving security into the core developer workflow.”
GitHub Code Scanning
The Code Scanning feature is powered by CodeQL, a powerful static analysis engine built by Semmle, which was acquired by GitHub in September 2019.
“We want developers to be able to use their tools of choice, for any of their projects on GitHub, all within the native GitHub experience they love. We’ve partnered with more than a dozen open source and commercial security vendors to date and we’ll continue to integrate code scanning with other third-party vendors through GitHub Actions and Apps,” Baker noted.
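In practice, enabling code scanning through GitHub Actions boils down to a short workflow file that runs the CodeQL action on each push or pull request. This minimal sketch is illustrative – action versions, trigger events, and the language list will vary per repository:

```yaml
# .github/workflows/codeql.yml -- run CodeQL analysis on pushes and PRs
name: "Code scanning"
on: [push, pull_request]

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: github/codeql-action/init@v1
        with:
          languages: python   # adjust to the repository's languages
      - uses: github/codeql-action/analyze@v1
```

Results of the analysis then surface as code scanning alerts directly in the repository’s Security tab and on pull requests.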
“The major value add here is that developers can work, and stay, within the code development ecosystem they’re most accustomed to while using their preferred scanning tools,” explained James Brotsos, Senior Solutions Engineer at Checkmarx.
“GitHub is an immensely popular resource for developers, so having something that ensures the security of code without hindering agility is critical. Our ability to automate SAST and SCA scans directly within GitHub repos simplifies workflows and removes tedious steps for the development cycle that can traditionally stand in the way of achieving DevSecOps.”
Checkmarx’s SCA (software composition analysis) solution helps developers discover and remediate vulnerabilities within the open source components included in their applications, and prioritize them based on severity. Checkmarx SAST (static application security testing) scans proprietary code bases – even uncompiled ones – to detect new and existing vulnerabilities.
“This is all done in an automated fashion, so as soon as a pull request takes place, a scan is triggered, and results are embedded directly into GitHub. Together, these integrations paint a holistic picture of the entire application’s security posture to ensure all potential gaps are accounted for,” Brotsos added.
Leon Juranic, CTO at DefenseCode, said that they are very excited by this initiative, as it provides access to security analysis to over 50 million GitHub users.
“Having the security analysis results displayed as code scanning alerts in GitHub provides a convenient way to triage and prioritize fixes – a process that is usually cumbersome, requiring scrolling through many pages of exported reports, going back and forth between your code and the reported results, or reviewing them in dashboards provided by the security tool. The ease of use now means you can initiate scans, view, fix, and close alerts for potential vulnerabilities in your project’s code in an environment that is already familiar and where most of your other workflows are done,” he noted.
A week ago, GitHub also announced additional support for container scanning and standards and configuration scanning for infrastructure as code, with integration by 42Crunch, Accurics, Bridgecrew, Snyk, Aqua Security, and Anchore.
The benefits and future plans
“We expect code scanning to prevent thousands of vulnerabilities from ever existing, by catching them at code review time. We envisage a world with fewer software vulnerabilities because security review is an automated part of the developer workflow,” Baker explained.
“During the code scanning beta, developers fixed 72% of the security errors found by CodeQL and reported in the code scanning pull request experience. Achieving such a high fix rate is the result of years of research, as well as an integration that makes it easy to understand each result.”
Over 12,000 repositories tried code scanning during the beta, and another 7,000 have enabled it since it became generally available, he says, and the reception has been really positive, with many highlighting valuable security finds.
“We’ll continue to iterate and focus on feedback from the community, including around access control and permissions, which are of high priority to our users,” he concluded.
20% of security professionals described their organizations’ DevSecOps practices as “mature”, 62% said they are improving practices, and 18% described them as “immature”, a WhiteSource report finds.
The survey gathered responses from over 560 developers and application security professionals in North America and Western Europe about the state of DevSecOps implementation in their organizations.
Reaching full DevSecOps maturity
- In order to meet short deployment cycles, 73% of security professionals and developers feel forced to compromise on security.
- AppSec tools are purchased to ‘check the box’, disregarding developers’ needs and processes – so tools get bought but developers don’t fully use them. Notably, the more mature an organization is in terms of its DevSecOps practices, the more AppSec tools it uses.
- There is a significant “AppSec knowledge and skills gap” challenge that is largely neglected by organizations. While 60% of security professionals say they have had an AppSec program in place for at least a year, only 37% of developers surveyed were aware of an AppSec program running for longer than a year inside their organization.
- Security professionals’ top challenge is prioritization, but organizations lack the standardized processes to streamline vulnerability prioritization.
“Survey results show that while most security professionals and developers believe that their organizations are in the process of adopting DevSecOps, most organizations still have a way to go, especially when it comes to breaking down the silos separating development and security teams,” said Rami Sass, CEO, WhiteSource.
“Full DevSecOps maturity requires organizations to implement DevSecOps across the board. Processes, tools, and culture need to evolve in order to break down the traditional silos and ensure that all teams share ownership of both security and agility.”
75% of AppSec practitioners and 49% of developers believe there is a cultural divide between their respective teams, according to ZeroNorth.
As digital transformation takes hold, it is increasingly vital that AppSec teams and developers work well together. With DevOps methodology seeing more adoption, teams are delivering software at continually higher velocities. Speed is the culture of DevOps, and it often runs counter to the culture of security – risk-averse and rigid.
The research, conducted by Ponemon Institute, surveyed 581 security practitioners and 549 developers on the cultural divide, its implications, the impact of COVID-19 and teleworking on the divide, and how to bridge the divide.
The findings of the research highlight both the software delivery and security impacts resulting from the cultural divide across AppSec and developer teams. For example, 56% of developers say AppSec stifles innovation.
On the other hand, 65% of AppSec professionals believe developers do not care about securing applications early in the software development lifecycle.
Teams not sharing opinions on application risk
Importantly, for AppSec teams and developers to share a culture centered on delivering secure applications, there must be a shared understanding of risk. The teams are not aligned on this front, however: only 35% of developers say application risk is increasing, while 60% of AppSec professionals believe this to be true.
“As this survey shows, the cultural divide is here today, and will become more exacerbated as organizations move towards DevOps, rendering the traditional, centralized model for security obsolete,” said ZeroNorth CEO, John Worrall.
“We believe this opens the doors for CISOs to become a pillar that supports the bridge between AppSec and development cultures. By enabling a culture that empowers both development and security to execute on their priorities, CISOs can transform the cultures that stifle innovation while significantly improving security.”
“This important research reveals the serious impact the AppSec and Developer cultural divide can have on an organization’s security posture,” said Larry Ponemon, chairman, Ponemon Institute.
“Based on the research findings, we recommend organizations take the following five steps to help bridge the cultural divide: (1) ensure sufficient resources are allocated to secure applications in the development and production phases of the SDLC, (2) apply application security practices consistently across the enterprise, (3) ensure developers have the knowledge and skills to address critical vulnerabilities in the application development and production life cycle, (4) conduct testing throughout application development, and (5) ensure testing methods scale efficiently from a few to many applications.”
Understanding the cultural divide and its implications
- Developers and AppSec practitioners don’t agree on which function is responsible for the security of applications. 39% of developers say the security team is responsible, while 67% of AppSec practitioners say their teams are responsible.
- AppSec and developer respondents admit working together is challenging, with AppSec respondents saying it is because the developers publish code with known vulnerabilities. Developers say security does not understand the pressure of meeting their deadlines and security stifles their ability to innovate.
- Digital transformation is putting pressure on organizations to develop applications at increasing speeds, which puts security at risk. 65% of developer respondents say they feel the pressure to develop applications faster than before the digital transformation, and 50% of AppSec respondents agree.
- 71% of AppSec respondents say the state of security is undermined by developers who don’t care about the need to secure applications early in the SDLC and 69% say developers do not have visibility into the overall state of application security.
The impact of COVID-19 and teleworking on the cultural divide
- 66% of developers and 72% of AppSec respondents say teleworking is stressful. Only 29% of developers and 38% of AppSec respondents are very confident that teleworkers are complying with organizational security and privacy requirements.
- 74% of AppSec and 47% of developer respondents say their organizations were highly effective at stopping security compromises before COVID-19. After the pandemic started, only one-third of respondents in both groups say their effectiveness is high.
Microsoft has open-sourced OneFuzz, its own internal continuous developer-driven fuzzing platform, allowing developers around the world to receive fuzz testing results directly from their build system.
Fuzzing is an automated software testing technique that involves entering random, unexpected, malformed and/or invalid data into a computer program. The goal is to reveal exceptions (e.g., crashes, memory leaks, etc.) and unexpected behaviors that could affect the program’s security and performance.
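The core idea can be sketched in a few lines of Python: a toy mutation fuzzer with a deliberately buggy target (both hypothetical; real fuzzers like those OneFuzz orchestrates add coverage feedback, corpus management, and crash triage):

```python
import random

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Randomly overwrite a few bytes of a seed input."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(target, seed: bytes, iterations: int = 1000) -> list:
    """Feed mutated inputs to `target`, collecting any that crash it."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            target(sample)
        except Exception:
            crashes.append(sample)  # an exception = a finding to triage
    return crashes

def parse(data: bytes):
    """Toy parser with a planted bug: it chokes when the high bit is set."""
    if data[0] & 0x80:
        raise ValueError("parser bug triggered")

random.seed(0)  # deterministic for the sake of the example
findings = fuzz(parse, seed=b"\x00" * 8)
```

Each crashing input in `findings` is the kind of artifact a fuzzing platform would then deduplicate, reproduce, and hand to a developer for debugging.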
Azure-powered continuous developer-driven fuzzing
Project OneFuzz is an extensible, self-hosted Fuzzing-As-A-Service platform for Azure that aggregates several existing fuzzers and (through automation) bakes in crash detection, coverage tracking and input harnessing.
The tool is used by Microsoft’s internal teams to strengthen the security development of Windows, Microsoft Edge, and other software products.
“Traditionally, fuzz testing has been a double-edged sword for developers: mandated by the software-development lifecycle, highly effective in finding actionable flaws, yet very complicated to harness, execute, and extract information from,” Microsoft Security principal security software engineering lead Justin Campbell and senior director for special projects management Mike Walker noted.
“That complexity required dedicated security engineering teams to build and operate fuzz testing capabilities making it very useful but expensive. Enabling developers to perform fuzz testing shifts the discovery of vulnerabilities to earlier in the development lifecycle and simultaneously frees security engineering teams to pursue proactive work.”
The tool’s capabilities
As the two explained, OneFuzz will allow developers to launch fuzz jobs – ranging in size from a few virtual machines to thousands of cores – with a single command line baked into the build system.
The tool’s features include:
- Composable fuzzing workflows: Open source allows users to onboard their own fuzzers, swap instrumentation, and manage seed inputs.
- Built-in ensemble fuzzing: By default, fuzzers work as a team to share strengths, swapping inputs of interest between fuzzing technologies.
- Programmatic triage and result deduplication: It provides unique flaw cases that always reproduce.
- On-demand live-debugging of found crashes: It lets users summon a live debugging session on-demand or from their build system.
- Transparent design that allows introspection into every stage.
- Detailed telemetry: Easy monitoring of all fuzzing jobs
- Multi-platform by design: Fuzzing can be performed on Windows and various Linux OSes, using one’s own OS build, kernel, or nested hypervisor.
- Crash reporting notification callbacks: Currently supporting Azure DevOps Work Items and Microsoft Teams messages
- Code Coverage KPIs: Users can monitor their progress and motivate testing using code coverage as a key metric.
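The “ensemble fuzzing” idea from the feature list above – multiple mutation strategies feeding one shared corpus of interesting inputs – can be illustrated with a toy Python sketch. The `coverage` function here is a stand-in for real edge-coverage instrumentation, and the whole example is illustrative rather than OneFuzz’s actual implementation:

```python
import random

def bitflip(data: bytes) -> bytes:
    """Strategy 1: flip a single bit of one byte."""
    buf = bytearray(data)
    buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
    return bytes(buf)

def splice(a: bytes, b: bytes) -> bytes:
    """Strategy 2: cross two corpus entries at a random cut point."""
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def coverage(data: bytes) -> set:
    # Stand-in for coverage instrumentation: the set of distinct
    # byte values the input "exercises".
    return set(data)

def ensemble_round(corpus: list, seen: set, rounds: int = 200):
    """Both strategies mutate corpus entries; any input reaching new
    'coverage' is shared back into the common corpus for both to use."""
    for _ in range(rounds):
        a, b = random.choice(corpus), random.choice(corpus)
        if random.random() < 0.5:
            candidate = bitflip(a)
        else:
            candidate = splice(a, b)
        new_cov = coverage(candidate) - seen
        if new_cov:                   # interesting input found
            seen |= new_cov
            corpus.append(candidate)  # shared between both strategies
    return corpus, seen

random.seed(1)
corpus, seen = ensemble_round([b"seed-input"], set(b"seed-input"))
```

Because both strategies draw from and feed the same corpus, an input discovered by bit-flipping immediately becomes raw material for splicing, which is the “swapping inputs of interest between fuzzing technologies” the feature list describes.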
OneFuzz will be available to the rest of the world in a few days (via GitHub). Microsoft will continue to update and expand it with contributions from the company’s various teams, and welcomes contributions and suggestions from the wider open-source community.
Nearly half of organizations regularly and knowingly ship vulnerable code despite using AppSec tools, according to Veracode.
Among the top reasons cited for pushing vulnerable code were pressure to meet release deadlines (54%) and finding vulnerabilities too late in the software development lifecycle (45%).
Respondents said that the lack of developer knowledge to mitigate issues and lack of integration between AppSec tools were two of the top challenges they face with implementing DevSecOps. However, nearly nine of ten companies said they would invest further in AppSec this year.
The software development landscape is evolving
The research sheds light on how AppSec practices and tools are intersecting with emerging development methods and creating new priorities such as reducing open source risk and API testing.
“The software development landscape today is evolving at light speed. Microservices-driven architecture, containers, and cloud-native applications are shifting the dynamics of how developers build, test, and deploy code. Without better testing, integration, and regular developer training, organizations will put themselves in jeopardy of a significant breach,” said Chris Wysopal, CTO at Veracode.
- 60% of organizations report having production applications exploited by OWASP Top 10 vulnerabilities in the past 12 months. Similarly, seven in 10 applications have a security flaw in an open source library on initial scan.
- Developers’ lack of knowledge on how to mitigate issues is the biggest AppSec challenge – 53% of organizations only provide security training for developers once a year or less. Data shows that the top 1% of applications with the highest scan frequency carry about five times less security debt, or unresolved flaws, than the least frequently scanned applications, which means frequent scanning helps developers find and fix flaws to significantly lower their organization’s risk.
- 43% cited DevOps integration as the most important aspect to improving their AppSec program.
- 84% report challenges due to too many AppSec tools, making DevOps integration difficult. 43% of companies report that they have between 11 and 20 AppSec tools in use, while 22% said they use between 21 and 50.
According to ESG, the most effective AppSec programs report the following as some of the critical components of their program:
- Application security is highly integrated into the CI/CD toolchain
- Ongoing, customized AppSec training for developers
- Tracking continuous improvement metrics within individual development teams
- AppSec best practices are being shared by development managers
- Using analytics to track progress of AppSec programs and to provide data to management
Need a tool to check your Python-based applications for security issues? Facebook has open-sourced Pysa (Python Static Analyzer), a tool that looks at how data flows through the code and helps developers prevent data flowing into places it shouldn’t.
How the Python Static Analyzer works
Pysa is a security-focused tool built on top of Pyre, Facebook’s performant type checker for Python.
“Pysa tracks flows of data through a program. The user defines sources (places where important data originates) as well as sinks (places where the data from the source shouldn’t end up),” Facebook security engineer Graham Bleaney and software engineer Sinan Cepel explained.
“Pysa performs iterative rounds of analysis to build summaries to determine which functions return data from a source and which functions have parameters that eventually reach a sink. If Pysa finds that a source eventually connects to a sink, it reports an issue.”
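Pysa performs this analysis statically, without running the code, but the source/sink concept itself can be illustrated with a small runtime sketch in Python. All names below are hypothetical, invented for illustration:

```python
class Tainted(str):
    """Marks data that originated from an untrusted source."""

def source() -> str:
    # e.g. an HTTP request parameter in a Django/Tornado handler
    return Tainted("user-supplied value; rm -rf /")

def normalize(value: str) -> str:
    # Analogous to a summary Pysa builds: this function returns data
    # derived from its argument, so taint on the input flows to the output.
    result = value.strip()
    return Tainted(result) if isinstance(value, Tainted) else result

def sink(command: str):
    # e.g. a shell command or raw SQL query -- tainted data must not end here
    if isinstance(command, Tainted):
        raise RuntimeError("flagged: source data reached a sink")

safe = normalize("ls -l")        # untainted data passes through
flagged = normalize(source())    # taint propagates through normalize()
```

Calling `sink(flagged)` raises, while `sink(safe)` does not – which is exactly the distinction Pysa draws at analysis time: it reports only those paths along which source data can actually reach a sink.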
It’s used internally by Facebook to quickly check the (Python) code that powers Instagram’s servers: it scans developers’ proposed code changes for security and privacy issues to prevent them from being introduced into the codebase, and it also detects existing issues.
The flagged issues are, depending on their type, reported either to the developer or to security engineers for review.
“Because we use open source Python server frameworks such as Django and Tornado for our own products, Pysa can start finding security issues in projects using these frameworks from the first run. Using Pysa for frameworks we don’t already have coverage for is generally as simple as adding a few lines of configuration to tell Pysa where data enters the server,” the two engineers added.
The tool’s limitations and stumbling blocks
Pysa can’t detect all security or privacy issues – only data flow–related ones. What’s more, it can’t detect every data flow–related issue either, because the Python programming language is very flexible and dynamic (it allows runtime code imports, modification of function behavior, etc.).
Finally, those who use it have to make a choice about how many false positives and false negatives they will tolerate.
“Because of the importance of catching security issues, we built Pysa to avoid false negatives and catch as many issues as possible. Reducing false negatives, however, may require trade-offs that increase false positives. Too many false positives could in turn cause alert fatigue and risk real issues being missed in the noise,” the engineers explained.
The number of false positives can be reduced by using sanitizers, as well as manually added and automatic features.
As technology constantly advances, software development teams are bombarded with security alerts at an increasing rate. This has made it nearly impossible to remediate every vulnerability, rendering the ability to properly prioritize remediation all the more critical, according to WhiteSource and CYR3CON.
This research examines the most common methods software development teams use to prioritize software vulnerabilities for remediation, and compares those practices to data gathered from the discussions of hacker communities, including the dark web and deep web.
Key research findings
- Software development teams tend to prioritize based on available data such as vulnerability severity score (CVSS), ease of remediation, and publication date, but hackers don’t target vulnerabilities based on these parameters.
- Hackers are drawn to specific vulnerability types (CWEs), including CWE-20 (Input Validation), CWE-125 (Out-of-bound Read), CWE-79 (XSS), and CWE-200 (Information Leak/Disclosure).
- Organizations tend to prioritize “fresh” vulnerabilities, while hackers often discuss vulnerabilities for over 6 months following exploitation, with even older vulnerabilities re-emerging in hacker community discussions as they reappear in new exploits or malware.
You can’t fix everything
“As development teams face an ever-rising number of disclosed vulnerabilities, it becomes impossible to fix everything and it’s imperative that teams focus on addressing the most urgent issues first,” said Rami Sass, CEO, WhiteSource.
“All too often companies unknowingly accept risk by using outdated methods of vulnerability prioritization – and this report sheds light on the shortcomings of those approaches. Combining threat intelligence and machine learning overcomes those shortcomings, highlighting previously unidentified risks in the process,” said CYR3CON CEO Paulo Shakarian.
The COVID-19 pandemic and its impact on the world has made a growing number of people realize how many of our everyday activities depend on software.
We increasingly work, educate ourselves, play, communicate with others, consume entertainment, go shopping and do many other things in the digital world, and we depend on software and online services/apps to make that possible. Software is now everywhere and embedded within just about everything we touch.
The pandemic has also significantly accelerated companies’ digital transformation efforts and the proliferation of new software, and has stressed two undeniable facts:
- Software security is more necessary than ever before
- Automated application testing solutions that support developer workflows are the only way to achieve software security at such an intense pace and scale
Problems to solve when aiming for software security
When we talk about software security, we talk about proactively making an effort to create software that is nearly impenetrable to cyberattacks. We talk about working with that goal in mind during each phase of the software development lifecycle (SDLC) and finding and fixing security vulnerabilities before they have a chance of becoming a problem.
At a surface level, it sounds like a no-brainer, but there are a number of challenges organizations face when it comes to putting the idea in practice in the form of a true DevSecOps program.
Many traditional software security approaches are also falling short, either due to a lack of SDLC and developer workflow integration, a failure to cover all stages of the SDLC holistically, a disregard of developer needs, or a lack of testing automation.
Embedding security into DevOps
Slowly but surely, DevOps has become the software delivery methodology of choice for many organizations.
By aligning all the people/departments involved in software development and delivery and empowering them to work in tandem, organizations that choose the DevOps culture and implement it well are able to deliver high quality software faster. And those that choose to embed security into DevOps (DevSecOps), make the whole proposition less risky for everybody involved, including the customer.
But how to do it so that everybody involved is enthusiastically on board and satisfied? The answer: make security testing intrinsic to the software development and delivery processes by integrating it into existing pipelines, automate it, and embed AppSec training and awareness on top of all developer operations to ensure continuous education.
With its Software Security Platform, which merges static application security testing (SAST), software composition analysis (SCA), interactive application security testing (IAST) and in-context developer awareness and training (aka “Codebashing”), Checkmarx has all those requirements covered.
In fact, the company’s platform has recently been named by Gartner as the “best fit” for DevOps, and the company as a 2020 Gartner Magic Quadrant Leader for Application Security Testing for the third year in a row.
To them, that’s no surprise, as they are working hard to stay on the bleeding edge of software security by continually innovating across their suite of AST solutions.
Matt Rose, Checkmarx’s Global Director of Application Security Strategy, says that they’ve seen a lot of changes in the industry throughout the years, but that their product was really designed ahead of its time and fits “unbelievably well” with the modern DevOps processes.
None of this has gone unnoticed by private-equity firm Hellman & Friedman, which, in the midst of the COVID-19 pandemic, finalized a $1.15 billion acquisition of Checkmarx – the largest AppSec vendor acquisition to date.
The acquisition cements the company’s place in the industry as a player that is not going away, Rose noted, and the investment will allow them to continue their forward momentum and prepare for the future in terms of providing the best application security testing platform in the world.
Developer-focused security and automation
There are a few recent additions to Checkmarx’s Software Security Platform that solve industry challenges:
- How to identify vulnerable open source components in applications and quickly remediate vulnerabilities, and
- How to simplify the automation of application security testing to reduce the friction and latency between developer and security teams.
The former comes in the form of a new SaaS-based software composition analysis (SCA) solution (CxSCA) that can be used as part of the platform or independently of it. Featuring a unique “exploitable path” capability, CxSCA leverages Checkmarx’s leading source analysis technologies to identify vulnerable open source components that are in the execution path of the vulnerability, allowing AppSec teams and developers to focus their remediation efforts on the greatest risks. This dramatically reduces time spent from the point of vulnerability detection to triage and increases developers’ productivity.
The latter is solved by Checkmarx’s unique automation capabilities via an orchestration module (CxFlow) for the platform. With this, Checkmarx enables automated scanning earlier in the code management process by integrating directly into source code management systems (think GitHub, GitLab, BitBucket, Azure DevOps), as well as providing extensive integrations with leading CI/CD tools. With developer and AppSec teams being asked to build and deploy software – that is secure – faster than ever before, the ability to automate testing within developers’ work environment is critical.
“A common way of thinking is that CI orchestration is the best place to automate application security testing capabilities. However, multiple implementation barriers – ranging from lengthy set up times to inflexible CI processes – usually accompany this approach,” Rose noted.
“With Checkmarx, we can automate the testing of the software earlier by focusing on the source code management systems. In doing so, when a developer pushes code into the source code management system when they’re done, we listen when that push or pull request is made and then automate the scanning all the way through tickets being created. Developers really benefit from this as it simplifies AST automation within DevOps, without interrupting their workflow.”
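The push-to-scan-to-ticket flow Rose describes can be sketched generically. All function names below are hypothetical placeholders invented for illustration – they are not Checkmarx’s actual CxFlow API:

```python
import json

def run_scan(repo: str, commit: str) -> list:
    """Hypothetical stand-in for triggering an AST scan on a commit."""
    return [{"severity": "high", "file": "app.py", "issue": "SQL injection"}]

def create_ticket(finding: dict) -> str:
    """Hypothetical stand-in for an issue-tracker integration."""
    return f"SEC/{finding['file']}: {finding['issue']}"

def on_push_event(payload: str) -> list:
    """Handle an SCM push/pull-request webhook: scan the pushed commit
    and open one ticket per finding, all without the developer ever
    leaving their normal workflow."""
    event = json.loads(payload)
    findings = run_scan(event["repository"], event["after"])
    return [create_ticket(f) for f in findings]
```

The key design point is the trigger: listening to the source code management system’s own webhook events means the scan starts at push/pull-request time, earlier than a CI-orchestrated stage would.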
Looking ahead, Checkmarx continues to advance its offering to address the needed security for software and development trends like cloud native, microservices and containers. “DevOps is still evolving, a lot of the tooling is still evolving, and our capabilities will evolve with them,” Rose said.
Securing the application prior to release
There’s no doubt about it (and customers demand it): application security testing technologies must be automated to be effective in the modern software development arena, and Checkmarx is setting the standard. Their customers back this claim, with reviews on Gartner Peer Insights including:
- “The Checkmarx products are invaluable to our organization. They are a key element of our AppSec strategy and implementation.”
- “If your company’s developer workforce is not used to incorporating security standards into their builds, the Checkmarx stack of tools will do wonders for you in terms of integrating into your existing pipelines and providing the education via Codebashing that your developers will need.”
Other important requirements for effective AppSec testing tools include the ability to be fitted into developers’ toolchains, to cover all phases of SDLC (from coding through check-in and CI), to provide rapid feedback, and to be flexible, i.e., to allow for many different ways of implementing the technology based on the way an organization is developing software and to offer different deployment options.
Checkmarx offers all that to help organizations achieve the ultimate goal: flagging potential security vulnerabilities and risk early on, when remediation is considerably easier.
IT and application development professionals tend to exhibit risky behaviors when organizations impose strict IT policies, according to SSH.
Polling 625 IT and application development professionals across the United States, United Kingdom, France, and Germany, the survey verified that hybrid IT is on the rise and shows no signs of slowing down.
Fifty-six percent of respondents described their IT environment as hybrid cloud, an increase from 41 percent a year ago. On average, companies are actively using two cloud service vendors at a time.
While hybrid cloud offers a range of strategic benefits related to cost, performance, security, and productivity, it also introduces the challenge of managing more cloud access.
Cloud access solutions slowing down work
The survey found that cloud access solutions, including privileged access management software, slow down daily work for 71 percent of respondents. The biggest speed bumps were cited as configuring access (34 percent), repeatedly logging in and out (30 percent), and granting access to other users (29 percent).
These hurdles often drive users to seek risky workarounds, with 52 percent of respondents claiming they would “definitely” or at least “consider” bypassing secure access controls if they were under pressure to meet a deadline.
Eighty-five percent of respondents also share account credentials with others out of convenience, even though 70 percent understand the risks of doing so. These risks are further exacerbated by the fact that 60 percent of respondents use insecure methods to store their credentials and passwords, including in email, in non-encrypted files or folders, and on paper.
“As businesses grow their cloud environments, secure access to the cloud will continue to be paramount. But when access controls lead to a productivity trade-off, as this research has shown, IT admins and developers are likely to bypass security entirely, opening the organization up to even greater cyber risk,” said Jussi Mononen, chief commercial officer at SSH.
“For privileged access management to be effective, it needs to be fast and convenient, without adding operational obstacles. It needs to be effortless.”
Orgs using public internet networks
In addition to exposing the risky behaviors of many IT and application development professionals when accessing the cloud, the survey also revealed some unwitting security gaps in organizations’ access management policies. For example, more than 40 percent of respondents use public internet networks – inherently less secure than private networks – to access internal IT resources.
Third-party access was also found to be a risk point, with 29 percent of respondents stating that outside contractors are given permanent access credentials to the business’ IT environment.
Permanent credentials are fundamentally risky as they provide widespread access beyond the task at hand, and can be forgotten, stolen, mismanaged, misconfigured, or lost.
Mononen continued, “When it comes to access management, simpler is safer. Methods like single sign-on can streamline the user experience significantly, by creating fewer logins and fewer entry points that reduce the forming of bad IT habits.
“There is also power in eliminating permanent access credentials entirely, using ephemeral certificates that unlock temporary ‘just-in-time’ access to IT resources, only for the time needed before access automatically expires. Ultimately, reducing the capacity for human error comes down to designing security solutions that put the user first and cut out unnecessary complexity.”
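The ephemeral, auto-expiring credentials Mononen describes can be sketched with a signed token that carries its own expiry (a toy HMAC scheme for illustration, not SSH's actual certificate mechanism):

```python
import base64
import hashlib
import hmac
import time

SECRET = b"demo-signing-key"  # illustrative only; a real CA key lives in an HSM

def issue(user, ttl_seconds, now=None):
    """Mint a short-lived credential that expires on its own,
    in the spirit of 'just-in-time' access."""
    now = time.time() if now is None else now
    payload = ("%s:%d" % (user, int(now + ttl_seconds))).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify(token, now=None):
    """Accept the token only if the signature checks out AND it is unexpired."""
    now = time.time() if now is None else now
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False  # tampered
    _, expiry = payload.decode().rsplit(":", 1)
    return now < int(expiry)  # expired tokens simply stop working

tok = issue("alice", ttl_seconds=300, now=1000.0)
print(verify(tok, now=1200.0), verify(tok, now=2000.0))
```

Because expiry is baked into the credential, there is nothing to revoke, forget, or leave behind — the failure mode of permanent credentials the survey highlights.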
An overwhelming majority of organizations prioritize software quality over speed, yet still experience customer-impacting issues regularly, according to OverOps.
The report, based on a survey of over 600 software development and delivery professionals, revealed that the current level of DevOps investment is not sufficient for ensuring software reliability. This year’s plans to invest in new tools like automated code analysis could be the key to solving this challenge.
“The move to DevOps and the increasing velocity at which teams are delivering software is unprecedented. At the same time, they are dealing with much higher levels of risk and vulnerability,” said Herb Krasner, an Advisory Board Member of the Consortium for IT Software Quality (CISQ).
“Although the issue of poor-quality software has existed since the dawn of IT, we have reached a turning point where we need to be much more serious about how we address these issues going forward. As such, the results and recommendations from the OverOps State of Software Quality report are important and timely.”
In the battle between speed and quality, most software professionals choose quality: The survey found that 70% of respondents say quality is paramount, and they would rather delay the product roadmap than risk a critical error impacting their users.
Regardless of whether a team prioritizes speed or quality, they all are encountering frequent production issues: 53% of respondents indicated they encounter critical or customer-impacting issues in production at least once a month.
A quarter of participants also said that over 40% of critical production issues are first reported by end users or customers rather than internal mechanisms.
Organizations are moving faster than ever: Continuous integration (54%) and continuous delivery (42%), both hallmarks of accelerated software delivery pipelines, were among the top three areas of DevOps investment for survey respondents.
45% of all respondents said that pressure to move fast was one of their top software quality challenges. Further, over half of respondents (59%) said they release new code/features anywhere from bi-weekly to multiple times a day.
Developer productivity is suffering: As a result of frequent critical production errors, development teams are spending a considerable amount of time troubleshooting code-related issues. Two out of three survey participants report spending at least a day per week troubleshooting issues in their code, with 30% spending anywhere from 2 days to a full week.
Automated code analysis could be the next big thing in software quality: Engineering organizations are expanding the scope of their automation initiatives beyond their CI/CD pipelines. Among the top plans for adoption in 2020 were static analysis (37%) and dynamic code analysis (27%), beaten only by DevOps (58%) and microservices/containers (45%).
Seven in 10 applications have a security flaw in an open source library, highlighting how use of open source can introduce flaws, increase risk, and add to security debt, a Veracode research reveals.
Nearly all modern applications, including those sold commercially, are built using some open source components. A single flaw in one library can cascade to all applications that leverage that code.
According to Chris Eng, Chief Research Officer at Veracode, “Open source software has a surprising variety of flaws. An application’s attack surface is not limited to its own code and the code of explicitly included libraries, because those libraries have their own dependencies.
“In reality, developers are introducing much more code, but if they are aware and apply fixes appropriately, they can reduce risk exposure.”
Open source libraries are ubiquitous and pose risks
- The most commonly included libraries are present in over 75% of applications for each language.
- Most flawed libraries end up in code indirectly: 47% of flawed libraries in applications are transitive – in other words, not pulled in directly by developers, but brought in by upstream libraries. Library-introduced flaws in most applications can be fixed with only a minor version update; major library upgrades are not usually required.
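The direct-versus-transitive distinction the report draws can be illustrated with a short graph walk (the dependency graph below is invented for illustration):

```python
from collections import deque

# Hypothetical dependency graph: the app names its direct dependencies,
# which in turn name their own dependencies, and so on.
DEPS = {
    "app":             ["web-framework", "logger"],
    "web-framework":   ["http-parser", "template-engine"],
    "template-engine": ["sandbox"],
    "logger":          [],
    "http-parser":     [],
    "sandbox":         [],
}

def resolve(root):
    """Split a project's libraries into direct and transitive sets —
    the transitive ones are the code developers never asked for."""
    direct = set(DEPS.get(root, []))
    seen, queue = set(), deque(direct)
    while queue:
        lib = queue.popleft()
        if lib in seen:
            continue
        seen.add(lib)
        queue.extend(DEPS.get(lib, []))
    return direct, seen - direct  # transitive = reachable but not direct

direct, transitive = resolve("app")
print(sorted(direct), sorted(transitive))
```

A flaw in `sandbox` here would reach the application through two hops, which is exactly why attack surface is larger than the explicitly declared dependency list suggests.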
Language makes a difference
- Language selection makes a difference both in terms of the size of the ecosystem and in the prevalence of flaws in those ecosystems. Including any given PHP library has a greater than 50% chance of bringing a security flaw along with it.
- Among flaws mapping to the OWASP Top Ten, Cross-Site Scripting is the most common vulnerability category found in open source libraries – found in 30% of libraries – followed by insecure deserialization (23.5%) and broken access control (20.3%).
Roles across software development teams have changed as more teams adopt DevOps, according to GitLab.
The survey of over 3,650 respondents from 21 countries worldwide found that rising rates of DevOps adoption and the implementation of new tools have led to sweeping changes in job functions, tool choices and organization charts within developer, security and operations teams.
“This year’s Global DevSecOps Survey shows that there are more successful DevOps practitioners than ever before and they report dramatically faster release times, truly continuous integration/deployment, and progress made toward shifting both test and security left,” said Sid Sijbrandij, CEO at GitLab.
“That said, there is still significant work to be done, particularly in the areas of testing and security. We look forward to seeing improvements in collaboration and testing across teams as they adjust to utilizing new technologies and job roles become more fluid.”
It’s a changing world for developer, operations and security teams and that holds true for roles and responsibilities as well as technology choices that improve DevOps practices and speed up release cycles. When done right, DevOps can go a long way to improve a business’s bottom line, but there are still obstacles to overcome to achieve true DevSecOps.
DevOps adoption and software development teams
Every company is now a software company and to drive business results, it is even more critical for teams to understand how the role of the developer is evolving – and how it impacts security, operations and test teams’ responsibilities.
The lines are blurring between developers and operations teams as 35% of developers say they define and/or create the infrastructure their app runs on and 14% actually monitor and respond to that infrastructure – a role traditionally held by operations.
Additionally, over 18% of developers instrument code for production monitoring, while 12% serve as an escalation point when there are incidents.
DevOps adoption rates are also up – 25% of companies are in the DevOps “sweet spot” of three to five years of practice while another 37% are well on their way, with between one and three years of experience under their belts.
As part of this implementation, many are also seeing the benefits of continuous deployment: nearly 60% deploy multiple times a day, once a day or once every few days (up from 45% last year).
As more teams become more accustomed to using DevOps in their work, roles across software development teams are starting to shift as responsibilities begin to overlap. 70% of operations professionals report that developers can provision their own environments, which is a sign of shifting responsibilities brought on by new processes and changing technologies.
Security teams unclear about responsibilities
There continues to be a clear disconnect between developers and security teams, with uncertainty about who should be responsible for security efforts. More than 25% of developers reported feeling solely responsible for security, compared to testers (23%) and operations professionals (21%).
For security teams, even more clarity is needed, with 33% of security team members saying they own security, while 29% (nearly as many) said they believe everyone should be responsible for security.
Security teams continue to report that developers are not finding enough bugs at the earliest stages of development and are slow to prioritize fixing them – a finding consistent with last year’s survey.
Over 42% said testing still happens too late in the life cycle, while 36% reported it was hard to understand, process, and fix any discovered vulnerabilities, and 31% found prioritizing vulnerability remediation an uphill battle.
“Although there is an industry-wide push to shift left, our research shows that greater clarity is needed on how teams’ daily responsibilities are changing, because it impacts the entire organization’s security proficiency,” said Johnathan Hunt, vice president of security at GitLab.
“Security teams need to implement concrete processes for the adoption of new tools and deployments in order to increase development efficiency and security capabilities.”
New technologies help with faster releases, create bottlenecks in other areas
For development teams, speed and faster software releases are key. Nearly 83% of developers report being able to release code more quickly after adopting DevOps.
Continuous integration and continuous delivery (CI/CD) is also proven to help reduce time for building and deploying applications – 38% said their DevOps implementations include CI/CD. An additional 29% said their DevOps implementations include test automation, 16% said DevSecOps, and nearly 9% use multi-cloud.
Despite this, testing has emerged as the top bottleneck for the second year in a row, according to 47% of respondents. Automated testing is on the rise, but only 12% claim to have full test automation. And, while 60% of companies report deploying multiple times a day, once a day or once every few days, over 42% said testing happens too late in the development lifecycle.
While strides toward implementing DevOps practices have been made, there is more work to be done when it comes to streamlining collaboration between security, developer and operations teams.
Engineers from SMU’s Darwin Deason Institute for Cybersecurity have developed software to detect ransomware attacks before attackers can inflict catastrophic damage.
Ransomware is crippling cities and businesses all over the world, and the number of ransomware attacks have increased since the start of the coronavirus pandemic. Attackers are also threatening to publicly release sensitive data if ransom isn’t paid. The FBI estimates that ransomware victims have paid hackers more than $140 million in the last six-and-a-half years.
How does software to detect ransomware work?
Unlike existing methods, such as antivirus software or other intrusion detection systems, SMU’s new software works even if the ransomware is new and has not been used before.
This detection method is known as sensor-based ransomware detection because the software doesn’t rely on information from past ransomware infections to spot new ones on a computer. In contrast, existing technology needs signatures of past infections to do its job.
“With this software we are capable of detecting what’s called zero-day ransomware because it’s never been seen by the computer before,” said Mitch Thornton, executive director of the Deason Institute and professor of electrical and computer engineering in SMU’s Lyle School of Engineering.
“Right now, there’s little protection for zero-day ransomware, but this new software spots zero-day ransomware more than 95 percent of the time.”
Fast computer scanning
The new software also can scan a computer for ransomware much faster than existing software, said Mike Taylor, lead creator of the software and a Ph.D. student at SMU.
“The results of testing this technique indicate that rogue encryption processes can be detected within a very small fraction of the time required to completely lock down all of a user’s sensitive data files,” Taylor noted. “So the technique detects instances of ransomware very quickly and well before extensive damage occurs to the victim’s computer files.”
“Ransomware is malware that enters a victim’s computer system and silently encrypts its stored files. It then alerts the user that they must pay a ransom, typically in a non-traceable currency such as bitcoin, in order to receive the key to decrypt their files,” Thornton explained.
“It also tells the victim that if they do not pay the ransom within a certain time period, the key for decryption will be destroyed and thus, they will lose their data.”
Detecting unauthorized encryption
The software functions by searching for small, yet distinguishable changes in certain sensors that are found inside computers to detect when unauthorized encryptions are taking place.
When attackers encrypt files, certain circuits inside the computer have specific types of power surges as files are scrambled. Computer sensors that measure temperature, power consumption, voltage levels, and other characteristics can detect these specific types of surges, researchers found.
The software monitors the sensors to look for the characteristic surges. And when a suspicious surge is detected, the software immediately alerts the computer to suspend or terminate the ransomware infection from completing the encryption process.
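As a rough illustration of the sensor-monitoring idea — not the SMU implementation, whose details are not public here — a rolling baseline over sensor readings can flag a sudden surge:

```python
from collections import deque
from statistics import mean, stdev

def detect_surges(readings, window=8, threshold=3.0):
    """Flag indices where a reading deviates sharply from the recent
    baseline — a toy stand-in for watching hardware sensors for
    encryption-like power surges."""
    baseline = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(baseline) == baseline.maxlen:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma and (value - mu) / sigma > threshold:
                alerts.append(i)  # here the real system would suspend the process
        baseline.append(value)
    return alerts

# Steady power draw, then a sustained surge while files are being scrambled.
trace = [10, 11, 10, 12, 11, 10, 11, 12, 30, 31, 29]
print(detect_surges(trace))
```

Because the detector compares against recent behavior rather than known malware signatures, it needs no prior knowledge of the specific ransomware strain — the property that makes the sensor-based approach work against zero-day samples.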
Use of the computer’s own devices to spot ransomware “is completely different than anything else that’s out there,” Taylor said.
GitHub has made available two new security features for open and private repositories: code scanning (as a GitHub-native experience) and secret scanning.
With the former, it aims to prevent vulnerabilities from ever being introduced into software and, ideally, help developers eliminate entire bug classes forever. With the latter, it wants to make sure that developers are not inadvertently leaking secrets (e.g., cloud tokens, passwords, etc.) in their repositories.
The code scanning feature, which can be set up in every GitHub repository (in the Security tab), is powered by CodeQL, a semantic code analysis engine that GitHub made available last year.
While code analysis with CodeQL is not new, this new feature makes it part of the developers’ code review workflow.
With code scanning enabled, every ‘git push’ is scanned for potential security vulnerabilities. Results are displayed in the pull request for the developer to analyze, and additional information about the vulnerability and recommendations on how to fix things are offered, so they can learn from their mistakes.
Any public project can sign up for code scanning for free – GitHub will pay for the compute resources needed.
For a peek at how this will work in practice, check out this demonstration by Grey Baker, Director of Product Management at GitHub (start the video at 31:40):
Secret scanning (formerly “token scanning”) has been available for public repositories since 2018, but it can now be used for private repositories as well.
“With over ten million potential secrets identified, customers have asked to have the same capability for their private code. Now secret scanning also watches private repositories for known secret formats and immediately notifies developers when they are found,” explained Shanku Niyogi, Senior VP of Product at GitHub.
“We’ve worked with many partners to expand coverage, including AWS, Azure, Google Cloud, npm, Stripe, and Twilio.”
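A minimal sketch of the idea behind secret scanning — matching known secret formats in text — might look like this (the AWS access-key-ID prefix "AKIA" is public knowledge; the patterns here are simplified and are not GitHub's actual detection rules):

```python
import re

# A couple of well-known token shapes, deliberately simplified.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Report which known secret formats appear in a blob of text —
    the same idea, in miniature, behind secret scanning."""
    return sorted(name for name, pattern in SECRET_PATTERNS.items()
                  if pattern.search(text))

sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\n'
print(scan_text(sample))
```

Real secret scanning extends this with partner-supplied patterns and immediate notification (or revocation) when a match is pushed, but the core mechanism is format matching over repository content.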
Businesses must accelerate the shift to comprehensive continuous software testing in order to remain competitive, according to a report released by Capgemini and Broadcom.
The report, based on a survey of 500 senior decision makers in corporate IT, reveals that most businesses find it challenging to adapt their quality assurance and testing processes to the Agile way of working.
The crux of the challenge is that organizations find it difficult to frequently deploy a large number of releases faster into production, while also implementing an adequate, continuous, and fast validation process to prevent serious issues in production. In the absence of this balance, business performance and growth are at risk.
The report indicates that without full adoption of continuous software testing, businesses will reach a point where they will be unable to meet customer needs, making them vulnerable to more successful Agile competitors.
Orgs must invest in skilled pros and quality test solutions
While 55% of the enterprises surveyed have now adopted a continuous software testing approach, its slow increase in maturity (compared to last year) demonstrates a critical challenge for organizations to overcome.
Up to 56% of the organizations admitted they have challenges with in-sprint testing. Respondents said their teams spend 44% of their time searching for, managing and generating test data, while 36% stated that their teams spend more than half their time building and managing test environments.
62% of respondents said they are struggling to find skilled professionals to build their continuous software testing strategy and a third said developing skills in testing AI systems was a priority.
These factors are compounded by the issue of larger teams being held back by legacy systems, applications and hierarchies which can make applying new ways of working more challenging. To overcome these challenges, companies must focus on embracing the orchestration of quality engineering in Agile and DevOps.
“Continuous software testing is a critical element for gaining competitive advantage in an environment where companies must deliver products faster and faster to market in order to remain relevant.
“Organizations must accelerate their investment in quality engineering skills and continuous test solutions within their Agile and DevOps teams to ensure that Agile at scale does not fail,” said Mark Buenen, Global leader of digital assurance and quality engineering services at Capgemini.
“To achieve this, they must empower cross-functional Agile teams with sufficient quality engineering expertise and enable the QA culture, QA automation and test environment provisioning with a flexible quality support team.”
Creating visibility over quality levels and meaningful KPIs
78% of respondents said that “getting visibility throughout the development lifecycle” is a challenge when implementing continuous software testing.
The report suggests that the entire software development lifecycle needs to be brought together in a single source of truth, from release management through to deployment, with integrated tooling, quality checks, and metrics, to meet business needs.
Leveraging more intelligent solutions
According to the report, teams need to make more use of intelligent solutions to ensure they are selecting the right test cases and validating correctly. At present, only 42% make use of AI for predictive analytics, just 36% are deploying code coverage analysis, and 39% are using analytics from operations.
Investment in quality assurance skills
To leverage those intelligent solutions, businesses need to invest in new skills, including knowledge of business processes, automation, data analysis and machine learning.
Test organization and environments
36% of respondents stated that they spend over half their time managing test environments – the same proportion as last year. Companies need to take a different approach, the report notes, building test environments that can be spun up, replicated, decommissioned, and managed at scale. This will involve practices including cloud provisioning (currently used by 53% of respondents), service virtualization (45%), and containerization (37%).
“Continuous quality is critical for Agile, DevOps, and digital transformation. Besides making test automation a priority, organizations also need to think of embedding quality into every phase of their software development lifecycle.
“This requires modern, developer-friendly, AI-powered tools that make continuous quality easy to adopt and practice for every stakeholder and every team – from business to technical users.
“Teams need to overcome traditional barriers to quality at scale with tools that enable shift left and shift right, and leverage AI to provide proactive, actionable insights to maximize quality,” said Sushil Kumar, head of DevOps and Continuous Testing Business, Enterprise Software Division, Broadcom.
An IT startup has developed a novel blockchain-based approach for secure linking of databases, called ChainifyDB.
“Our software resembles keyhole surgery. With a barely noticeable procedure we enhance existing database infrastructures with blockchain-based security features. Our software is seamlessly compatible with the most common database management systems, which drastically reduces the barrier to entry for secure digital transactions,” explains Jens Dittrich, Professor of Computer Science at Saarland University at Saarbrücken, Germany.
How does ChainifyDB work?
The system offers various mechanisms for a trustworthy data exchange between several parties. The following example shows one of its use cases.
Assume some doctors are treating the same patient and want to maintain his or her patient file together. To do this, the doctors would have to install the Saarbrücken researchers’ software on their existing database management systems. Then, they could jointly create a data network.
In this network, the doctors set up a shared table in which they enter the patient file for the shared patient. “If a doctor changes something in his table, it affects all other tables in the network. Subsequent changes to older table states are only possible if all doctors in the network agree,” explains Jens Dittrich.
Another special feature: If something about the table is changed, the focus is not on the change itself, but on its result. If the result is identical in all tables in the network, the changes can be accepted. If not, the consensus process starts again.
“This makes the system tamper-proof and guarantees that all network participants’ tables always have the same status. Furthermore, only the shared data in the connected tables is visible to other network participants; all other contents of the home database remain private”, emphasizes Dr. Felix Martin Schuhknecht, Principal Investigator of the project.
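The result-based consensus described above can be sketched as each party hashing its own table state and comparing digests — a change commits only if every copy agrees (a deliberate simplification; ChainifyDB's actual protocol is more involved):

```python
import hashlib
import json

def table_digest(rows):
    """Canonical fingerprint of a table state: serialize rows in a
    stable order, then hash. Identical states yield identical digests
    regardless of row ordering."""
    key = lambda r: json.dumps(r, sort_keys=True)
    canonical = json.dumps(sorted(rows, key=key), sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def consensus(parties):
    """Accept a change only if all participants computed the same result."""
    digests = {table_digest(rows) for rows in parties.values()}
    return len(digests) == 1  # identical result everywhere -> accept

shared = [{"patient": 17, "note": "dosage updated"}]
print(consensus({"dr_a": shared, "dr_b": list(shared)}))        # same state
print(consensus({"dr_a": shared, "dr_b": [{"patient": 17}]}))   # diverged
```

Hashing the resulting state rather than the change itself is what makes the scheme robust: two parties may apply an update differently, but any divergence shows up immediately as mismatched digests and restarts the consensus round.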
Advantages for security-critical situations
The new software offers advantages especially for security-critical situations, such as hacker attacks or when business partners cannot completely trust each other. Malicious participants can be excluded from a network without impairing its functionality.
If a former participant is to be reinstated, the remaining network participants only have to agree on a “correct” table state. The previously suspended partner can then be set to this state. “As far as we know, this function is not yet offered by any comparable software,” adds Dittrich.
In order to bring ChainifyDB to market, the German Federal Ministry of Education and Research is supporting the Saarbrücken researchers’ start-up, which is currently being founded, with 840,000 euros.
Every machine needs a unique identity in order to authenticate itself and communicate securely with other machines. Meanwhile, the definition of a machine is radically changing—from traditional physical devices, like laptops and servers, to virtual machines, containers, microservices, IoT devices and AI algorithms.
According to Kevin Bocek, vice president at Venafi, all of these device types have been critical to innovation and digital transformation—yet little is done to safeguard their identities.
“While the number of machines in the cloud, hybrid infrastructure and enterprise networks is exploding, most organizations are still attempting to protect machine identities using human methods like spreadsheets,” said Bocek.
“However, this approach creates its own set of problems—businesses can’t keep up with the changes in volume and are being exposed to unacceptable risks.”
Authentication is essential
Secure, reliable authentication is essential to protect machine-to-machine communication, yet protecting every machine identity across an enterprise can be a challenge. But if machine identities are not adequately protected, the resulting damage can be serious.
According to a report from AIR Worldwide, between $51 billion and $72 billion in losses to the worldwide economy could be eliminated through the proper management and protection of machine identities.
According to Bocek, five major trends are contributing to the complexity and explosive growth of machines, which in turn are creating a Machine Identity Crisis.
The business imperatives that drove widespread cloud adoption—speed, agility, efficiency and economies of scale—are also the driving forces behind DevOps. These initiatives build an agile, interdependent relationship between software development and IT operations teams.
However, the containers and microservices used in these projects often need to communicate securely with one another and the network. As a result, organizations need a technical solution designed to help them protect the barrage of new DevOps machine identities. Open APIs add to the complexity of these projects, which underlines the need for each machine to have its own unique identity.
In the cloud, machines automatically create, configure and destroy other machines in response to business demand. In order to protect the security and privacy of cloud data, businesses must encrypt cloud workload data and adequately secure the machine identities that control communication between machines.
This includes machines in the cloud and across the enterprise. The rapid deployment, change and revocation of identities for cloud-based machines exponentially increase the challenge of keeping communication within the cloud, and between clouds, secure and private.
Automation and AI
One of the major characteristics of digital transformation has been the growth in automation, and in particular, autonomous machines. Automation has delivered efficiency gains across every industry, further augmented by the introduction of Robotic Process Automation (RPA) and Intelligent RPA and underpinned by Artificial Intelligence (AI).
It is essential to the growth of these markets to maintain the integrity and security of the input to these algorithms. Because machines need to communicate securely, it is important that communications are not manipulated in any way that could change the outcomes.
The Internet of Things (IoT)
Many businesses rely on IoT devices, so their use within enterprises is exploding. Each of these machines relies on keys and certificates for authentication and security. Unfortunately, many IoT devices focus on functionality over security, so there are numerous challenges and concerns that revolve around the security of IoT and smart devices. For example, a certificate-related outage or cyberattack could result in widespread business disruption.
Mobile devices
Organizations face escalating pressure to uniquely identify and authenticate every mobile device so they can authorize secure communication between these devices, enterprise networks and the internet.
Although smart mobile devices on enterprise networks have been a fact of life for over a decade, securing and protecting the sensitive corporate data that flows through these devices is becoming more challenging. Unfortunately, most organizations do not have the tools necessary to accomplish this.
Bocek added: “Organizations can only solve these problems with intelligent automation, and they must have complete visibility into every machine identity in the cloud, microservice, IoT network, mobile device and enterprise network.
“In addition, businesses need to monitor these identities in real time to detect misuse, misconfiguration and errors, as well as automatically remediate vulnerabilities discovered at machine speed and scale. DevOps and cloud engineering teams need to be given the speed of automation, and security teams must focus on safety.”
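The continuous monitoring Bocek calls for could begin with something as simple as an automated expiry sweep over a machine-identity inventory — the kind of check a spreadsheet cannot keep up with at machine scale (machine names and dates below are illustrative):

```python
from datetime import datetime, timedelta

# Toy machine-identity inventory; a real one would be discovered
# automatically across cloud, container, and enterprise networks.
INVENTORY = [
    {"machine": "payments-svc", "cert_expires": datetime(2020, 11, 1)},
    {"machine": "build-runner", "cert_expires": datetime(2021, 6, 1)},
]

def needs_action(inventory, now, lead=timedelta(days=30)):
    """Flag identities that are expired or inside the renewal window,
    so they can be rotated before they cause an outage."""
    return [item["machine"] for item in inventory
            if item["cert_expires"] - now <= lead]

print(needs_action(INVENTORY, now=datetime(2020, 10, 15)))
```

In a full solution this sweep would run continuously and feed automated remediation, but even this minimal version illustrates why visibility into every identity is the prerequisite for everything else.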
While nearly 75% of developers worry about the security of their applications and 85% rank security as very important in the coding and development process, nearly half of their teams lack a dedicated cybersecurity expert, according to WhiteHat Security.
Application security tools
While 57% of participants feel their teams have the right application security tools in place to incorporate security into the software development lifecycle (SDLC), 14% do not feel that they’ve been given the …
The post Developers worry about security, still half of teams lack an expert appeared first on Help Net Security.
Good practices for IoT security, with a particular focus on software development guidelines for secure IoT products and services throughout their lifetime, have been introduced in a report by ENISA.
The number of IoT devices is rising constantly, with an expected 25 billion IoT devices to be in use by 2021, according to a Gartner study. Notorious examples of IoT attacks such as Stuxnet and Mirai have led to growing concerns about the security measures …