Researchers aim to improve code patching in embedded systems

Three Purdue University researchers and their teammates at the University of California, Santa Barbara and Swiss Federal Institute of Technology Lausanne have received a DARPA grant to fund research that will improve the process of patching code in vulnerable embedded systems.


“Many embedded systems, like computer systems running in trucks, airplanes and medical devices, run old code for which the source code and the original compilation toolchain are unavailable,” said Antonio Bianchi, assistant professor of computer science at Purdue University.

“Many old software components running in these systems are known to contain vulnerabilities; however, patching them to fix these vulnerabilities is not always possible or easy.”

Without source code, patching a vulnerability necessitates editing the binary code directly, Bianchi said. Additionally, even in a system that has been patched, there is no guarantee that the patch will not interfere with the original functionality of the device. Because of these difficulties, he said, the code running in embedded systems is often left unpatched, even when it is known to be vulnerable.
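To make that concrete, a binary patch often amounts to overwriting a known-bad instruction sequence at a fixed offset in the firmware image. The sketch below shows the basic mechanics in Python; the offset, byte values, and file names are purely illustrative, and a real patch would also have to respect instruction encodings, checksums, and any signature verification.

```python
# Minimal sketch of a direct binary patch, assuming the vulnerable
# instruction sequence and its file offset are already known (e.g. from
# disassembly). Offsets, byte values, and file names are hypothetical.

VULN_OFFSET = 0x1A40                 # hypothetical file offset of the flaw
OLD_BYTES   = bytes.fromhex("39d8")  # bytes expected at that offset
NEW_BYTES   = bytes.fromhex("0020")  # replacement bytes (same length)

def patch_firmware(path_in: str, path_out: str) -> None:
    with open(path_in, "rb") as f:
        image = bytearray(f.read())

    # Refuse to patch if the image does not match what was analyzed.
    if bytes(image[VULN_OFFSET:VULN_OFFSET + len(OLD_BYTES)]) != OLD_BYTES:
        raise ValueError("unexpected bytes at patch site; wrong firmware version?")

    image[VULN_OFFSET:VULN_OFFSET + len(NEW_BYTES)] = NEW_BYTES

    with open(path_out, "wb") as f:
        f.write(image)

if __name__ == "__main__":
    patch_firmware("firmware.bin", "firmware_patched.bin")
```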

Ensuring the patch doesn’t interfere with device functionality

The team’s proposed approach entails defining and verifying a set of properties that a patch must have to ensure it doesn’t interfere with the device’s original functionality. The research also aims to develop automatic and minimal code patching for devices that may be vulnerable to cyberattacks.

Minimizing the modifications, Bianchi said, keeps the resources needed to verify the patched code small and reduces the chance of harming the device’s functionality. The team will also develop new ways to test patched code that do not require running it on real hardware.
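One common way to test without real hardware is CPU emulation. The sketch below uses the Unicorn emulator (assuming it is installed via pip) to run the original and patched versions of a single extracted ARM Thumb function on the same inputs and compare their return values; the memory layout, calling convention, and input files are assumptions for illustration, not the researchers’ actual method.

```python
# Rough sketch of hardware-free regression testing with the Unicorn CPU
# emulator (pip install unicorn). It runs the original and patched versions
# of one extracted function on the same inputs and compares the results.
from unicorn import Uc, UC_ARCH_ARM, UC_MODE_THUMB
from unicorn.arm_const import UC_ARM_REG_R0, UC_ARM_REG_SP

CODE_BASE  = 0x10000   # where the extracted function is loaded (assumed)
STACK_BASE = 0x80000   # scratch stack region (assumed)

def run_function(code: bytes, arg0: int) -> int:
    mu = Uc(UC_ARCH_ARM, UC_MODE_THUMB)
    mu.mem_map(CODE_BASE, 0x10000)           # code region
    mu.mem_map(STACK_BASE, 0x10000)          # stack region
    mu.mem_write(CODE_BASE, code)
    mu.reg_write(UC_ARM_REG_SP, STACK_BASE + 0x8000)
    mu.reg_write(UC_ARM_REG_R0, arg0)        # first argument in r0 (AAPCS)
    mu.emu_start(CODE_BASE | 1, CODE_BASE + len(code))  # LSB set = Thumb
    return mu.reg_read(UC_ARM_REG_R0)        # return value in r0

if __name__ == "__main__":
    original = open("func_original.bin", "rb").read()   # hypothetical blobs
    patched  = open("func_patched.bin", "rb").read()
    for arg in (0, 1, 0x7FFFFFFF):
        assert run_function(original, arg) == run_function(patched, arg), \
            f"behavioral difference for input {arg:#x}"
    print("patched function matches original on sampled inputs")
```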

Organizations knowingly ship vulnerable code despite using AppSec tools

Nearly half of organizations regularly and knowingly ship vulnerable code despite using AppSec tools, according to Veracode.


Among the top reasons cited for pushing vulnerable code were pressure to meet release deadlines (54%) and finding vulnerabilities too late in the software development lifecycle (45%).

Respondents said that the lack of developer knowledge to mitigate issues and the lack of integration between AppSec tools were two of the top challenges they face with implementing DevSecOps. However, nearly nine in ten companies said they would invest further in AppSec this year.

The software development landscape is evolving

The research sheds light on how AppSec practices and tools are intersecting with emerging development methods and creating new priorities such as reducing open source risk and API testing.

“The software development landscape today is evolving at light speed. Microservices-driven architecture, containers, and cloud-native applications are shifting the dynamics of how developers build, test, and deploy code. Without better testing, integration, and regular developer training, organizations will put themselves in jeopardy of a significant breach,” said Chris Wysopal, CTO at Veracode.

Key findings

  • 60% of organizations report having production applications exploited by OWASP Top 10 vulnerabilities in the past 12 months. Similarly, seven in 10 applications have a security flaw in an open source library on initial scan.
  • Developers’ lack of knowledge on how to mitigate issues is the biggest AppSec challenge – 53% of organizations only provide security training for developers once a year or less. Data shows that the top 1% of applications with the highest scan frequency carry about five times less security debt, or unresolved flaws, than the least frequently scanned applications, which means frequent scanning helps developers find and fix flaws to significantly lower their organization’s risk.
  • 43% cited DevOps integration as the most important aspect to improving their AppSec program.
  • 84% report challenges due to too many AppSec tools, making DevOps integration difficult. 43% of companies report that they have between 11 and 20 AppSec tools in use, while 22% said they use between 21 and 50.


According to ESG, the most effective AppSec programs report the following as some of the critical components of their program:

  • Application security is highly integrated into the CI/CD toolchain (see the sketch after this list)
  • Ongoing, customized AppSec training for developers
  • Tracking continuous improvement metrics within individual development teams
  • AppSec best practices are being shared by development managers
  • Using analytics to track progress of AppSec programs and to provide data to management
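As a rough illustration of that first point, the sketch below is a hypothetical CI gate step: it parses a scanner’s JSON report and fails the build when high-severity findings are present. The report file name and its schema (a `findings` list with `severity`, `id`, and `title` fields) are assumptions, not any specific vendor’s format.

```python
# Hypothetical CI gate: parse a scanner's JSON report and fail the build
# if any high/critical findings are present.
import json
import sys

BLOCKING = {"high", "critical"}

def main(report_path: str = "scan-report.json") -> int:
    with open(report_path) as f:
        report = json.load(f)

    blocking = [item for item in report.get("findings", [])
                if item.get("severity", "").lower() in BLOCKING]

    for finding in blocking:
        print(f"[BLOCKER] {finding.get('id', '?')}: {finding.get('title', '')}")

    if blocking:
        print(f"{len(blocking)} blocking finding(s); failing the build.")
        return 1
    print("No blocking findings.")
    return 0

if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))
```

In a pipeline, this script would run immediately after the scan step, so a serious flaw stops the merge rather than surfacing weeks later in production.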

Tech sector job interviews test performance anxiety rather than competence at coding

A study from North Carolina State University and Microsoft finds that the technical interviews currently used in hiring for many software engineering positions test whether a job candidate has performance anxiety rather than whether the candidate is competent at coding. The interviews may also be used to exclude groups or favor specific job candidates.


“Technical interviews are feared and hated in the industry, and it turns out that these interview techniques may also be hurting the industry’s ability to find and hire skilled software engineers,” says Chris Parnin, an assistant professor of computer science at NC State and co-author of a paper on the work.

“Our study suggests that a lot of well-qualified job candidates are being eliminated because they’re not used to working on a whiteboard in front of an audience.”

The effect of the interview process on aspiring software engineers

Technical interviews in the software engineering sector generally take the form of giving a job candidate a problem to solve, then requiring the candidate to write out a solution in code on a whiteboard – explaining each step of the process to an interviewer.

Previous research found that many developers in the software engineering community felt the technical interview process was deeply flawed. So the researchers decided to run a study aimed at assessing the effect of the interview process on aspiring software engineers.

For this study, researchers conducted technical interviews of 48 computer science undergraduates and graduate students. Half of the study participants were given a conventional technical interview, with an interviewer looking on. The other half of the participants were asked to solve their problem on a whiteboard in a private room.

The private interviews did not require study participants to explain their solutions aloud, and had no interviewers looking over their shoulders.

Researchers measured each study participant’s interview performance by assessing the accuracy and efficiency of each solution. In other words, they wanted to know whether the code a participant wrote would work, and the amount of computing resources needed to run it.

“People who took the traditional interview performed half as well as people that were able to interview in private,” Parnin says. “In short, the findings suggest that companies are missing out on really good programmers because those programmers aren’t good at writing on a whiteboard and explaining their work out loud while coding.”

The current format of technical interviews excluding certain job candidates

The researchers also note that the current format of technical interviews may also be used to exclude certain job candidates.

“For example, interviewers may give easier problems to candidates they prefer,” Parnin says. “But the format may also serve as a barrier to entire classes of candidates. For example, in our study, all of the women who took the public interview failed, while all of the women who took the private interview passed.

“Our study was limited, and a larger sample size would be needed to draw firm conclusions, but the idea that the very design of the interview process may effectively exclude an entire class of job candidates is troubling.”

What’s more, the specific nature of the technical interview process means that many job candidates spend weeks or months training specifically for the technical interview, rather than for the actual job they’d be doing.

“The technical interview process gives people with industry connections an advantage,” says Mahnaz Behroozi, first author of the study and a Ph.D. student at NC State.

“But it gives a particularly large advantage to people who can afford to take the time to focus solely on preparing for an interview process that has very little to do with the nature of the work itself. And the problems this study highlights are in addition to a suite of other problems associated with the hiring process in the tech sector,” adds Behroozi.

“If the tech sector can address all of these challenges in a meaningful way, it will make significant progress in becoming more fair and inclusive. More to the point, the sector will be drawing from a larger and more diverse talent pool, which would contribute to better work.”

A Boxcryptor audit shows no critical weaknesses in the software

More and more companies, self-employed professionals, and private customers are using Boxcryptor to protect sensitive data – primarily in the cloud. Boxcryptor ensures that nobody but authorized persons has access to the data; cloud providers and their staff, as well as potential hackers, are reliably excluded. The audit verified whether this protection is guaranteed.

During the audit, Kudelski was given access to the source code of Boxcryptor for Windows and to the internal documentation.

The auditors concluded: “All these components were logically correct and did not show any significant weakness under scrutiny. It is important to note that the codebase we audited was not showing any signs of malicious intent.”

The goal of the audit

The goal of the audit was to give all interested parties an indirect insight into the software so that they can be sure that no backdoors or security holes are hidden in the code.

Robert Freudenreich, CTO of Boxcryptor, about the benefits of an audit: “For private users, Boxcryptor is a means of digital self-defense against curious third parties, for companies and organizations a way to achieve true GDPR compliance and complete control over business data. With software that is so security relevant, it is understandable that users want to be sure that the software is flawless.”

The audit process started at the beginning of May with short communication lines to the developers and managers in the Boxcryptor team. If Kudelski had found a serious security vulnerability, they would not have held it back until the final report, but would have reported the problem immediately.

A problem rated as “medium”

The problem rated as medium is a part of the code that affects the connection to cloud providers using the WebDAV protocol. Theoretically, the operators of such cloud storage providers could have tried to inject code into Boxcryptor for Windows.

In practice, however, this code was never used by Boxcryptor, so there was no danger for Boxcryptor users at any time. In response to the audit, this redundant part of the code was removed.

Two problems classified as “low” and further observations

One problem classified as low concerns the user password: to protect users with insecure passwords, it was suggested that passwords be hashed with more iterations and that the minimum password length be increased, which Boxcryptor implemented immediately.
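For illustration only, the sketch below shows what those two recommendations look like in generic form: deriving a key with a higher iteration count and enforcing a longer minimum password length. It uses Python’s standard PBKDF2 helper; it is not Boxcryptor’s actual key-derivation scheme, and the parameters are placeholders.

```python
# Generic illustration of the two recommendations: a higher iteration count
# for password hashing and a longer minimum password length.
# Not Boxcryptor's actual scheme; all parameters are illustrative.
import hashlib
import os

MIN_PASSWORD_LENGTH = 12       # raised minimum length (illustrative)
PBKDF2_ITERATIONS   = 600_000  # raised iteration count (illustrative)

def derive_key(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    if len(password) < MIN_PASSWORD_LENGTH:
        raise ValueError(f"password must be at least {MIN_PASSWORD_LENGTH} characters")
    salt = salt or os.urandom(16)  # fresh random salt per user
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, PBKDF2_ITERATIONS)
    return key, salt
```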

The second problem classified as low was theoretical and concerned the reading of the Boxcryptor configuration.

How secure are open source libraries?

Seven in 10 applications have a security flaw in an open source library, highlighting how the use of open source can introduce flaws, increase risk, and add to security debt, Veracode research reveals.


Nearly all modern applications, including those sold commercially, are built using some open source components. A single flaw in one library can cascade to all applications that leverage that code.

According to Chris Eng, Chief Research Officer at Veracode, “Open source software has a surprising variety of flaws. An application’s attack surface is not limited to its own code and the code of explicitly included libraries, because those libraries have their own dependencies.

“In reality, developers are introducing much more code, but if they are aware and apply fixes appropriately, they can reduce risk exposure.”

Open source libraries are ubiquitous and pose risks

  • The most commonly included libraries are present in over 75% of applications for each language.
  • Most flawed libraries end up in code indirectly: 47% of those flawed libraries in applications are transitive – in other words, not pulled in directly by developers, but are being pulled in by upstream libraries. Library-introduced flaws in most applications can be fixed with only a minor version update; major library upgrades are not usually required.
  • Not all libraries have Common Vulnerabilities and Exposures (CVEs) – this means developers can’t rely only on CVEs to understand library flaws. For example, more than 61% of flawed libraries in JavaScript contain vulnerabilities without corresponding CVEs; the sketch below shows one way to look such advisories up.
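As a rough illustration, the sketch below queries the public OSV advisory database (osv.dev), which aggregates advisories with and without CVE IDs, for a specific library version. The endpoint shape follows OSV’s documented /v1/query API; the example package and version are arbitrary.

```python
# Query the OSV advisory database (https://osv.dev) for known vulnerabilities
# in a specific library version. Useful because many flawed libraries
# (especially in JavaScript) have advisories without CVE IDs.
import json
import urllib.request

def osv_query(ecosystem: str, name: str, version: str) -> list[dict]:
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    # Example package/version only; substitute your own dependencies.
    for vuln in osv_query("npm", "lodash", "4.17.15"):
        print(vuln["id"], "-", vuln.get("summary", "(no summary)"))
```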


Language makes a difference

  • Some language ecosystems tend to pull in many more transitive dependencies than others. In more than 80% of JavaScript, Ruby, and PHP applications, the majority of libraries are transitive dependencies.
  • Language selection makes a difference both in terms of the size of the ecosystem and in the prevalence of flaws in those ecosystems. Including any given PHP library has a greater than 50% chance of bringing a security flaw along with it.
  • Among the OWASP Top 10 flaw categories, weaknesses around access control are the most common overall, representing over 25% of all flaws. In open source libraries, Cross-Site Scripting is the most common vulnerability category – present in 30% of libraries – followed by insecure deserialization (23.5%) and broken access control (20.3%).

Technologies in all layers of the cloud stack are at risk

As breaches and hacks continue, and new vulnerabilities are uncovered, secure coding is being recognized as an increasingly important security concept — and not just for back-room techies anymore, Accurics reveals.


“Our report clearly describes how current security practices are grossly inadequate for protecting transient cloud infrastructures, and why more than 30 billion records have been exposed through cloud breaches in just the past two years,” said Sachin Aggarwal, CEO at Accurics.

“As cloud stacks become increasingly complex, with new technologies regularly added to the mix, what’s needed is a holistic approach with consistent protection across the full cloud stack, as well as the ability to identify risks from configuration changes to deployed cloud infrastructure from a baseline established during development.

“The shift to infrastructure as code enables this; organizations now have an opportunity to redesign their cloud security strategy and move away from a point solution approach.”

Key takeaways from the research

  • Misconfigurations of cloud native technologies across the full cloud native stack are a clear risk, increasing the attack surface, and being exploited by malicious actors.
  • There is a significant shift towards provisioning and managing cloud infrastructure through code. This offers an opportunity for organizations to embed security earlier in the DevOps lifecycle. However, infrastructure as code is not being adequately secured, due in part to the lack of tools that can provide holistic protection.
  • Even in scenarios where infrastructure as code actually is being governed, there are continuing problems from privileged users making changes directly to the cloud once infrastructure is provisioned. This creates posture drift from the secure baseline established through code.

Infrastructure as code

The research shows that securing cloud infrastructure in production isn’t enough. Researchers determined that only 4% of issues reported in production are actually being addressed. This is unsurprising since issue investigation and resolution at this late stage in the development lifecycle is challenging and costly.

A positive trend identified by the research is that there is a significant shift towards provisioning and managing cloud infrastructure through code to achieve agility and reliability.

Popular technologies include Terraform, Kubernetes, Docker, and OpenFaaS. Accurics’ research shows that 24% of configuration changes are made via code, which is encouraging given the fact that many of these technologies are relatively new.

Infrastructure as code provides organizations with an opportunity to embed security earlier in the development lifecycle. However, research revealed that organizations are not ensuring basic security and compliance hygiene across code.

The dangers are undeniable: high severity risks such as open security groups, overly permissive IAM roles, and exposed cloud storage services constituted 67% of the issues. This is particularly worrisome since these types of risks have been at the core of numerous high-profile cloud breaches.
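One way to catch the first of those risks before provisioning is to inspect the Terraform plan itself. The sketch below is a minimal, illustrative check for security groups open to the world in a plan exported with `terraform show -json plan.out > plan.json`; it assumes AWS security group resources in the root module and is no substitute for a full policy engine.

```python
# Minimal sketch: flag security groups that allow ingress from 0.0.0.0/0
# in a Terraform plan exported as JSON. Only root-module AWS security
# groups are checked; a real policy engine would also cover child modules,
# IAM policies, storage ACLs, and more.
import json

def open_security_groups(plan_path: str = "plan.json") -> list[str]:
    with open(plan_path) as f:
        plan = json.load(f)

    offenders = []
    resources = (plan.get("planned_values", {})
                     .get("root_module", {})
                     .get("resources", []))
    for res in resources:
        if res.get("type") != "aws_security_group":
            continue
        for rule in res.get("values", {}).get("ingress", []) or []:
            if "0.0.0.0/0" in (rule.get("cidr_blocks") or []):
                offenders.append(res.get("address", res.get("name", "?")))
    return offenders

if __name__ == "__main__":
    for address in open_security_groups():
        print(f"[HIGH] {address}: ingress open to 0.0.0.0/0")
```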

The study also shows that even if organizations implement policy guardrails and security assessments across infrastructure as code, 90% of organizations allow privileged users to make configuration changes directly to cloud infrastructure after it is deployed. This unfortunately results in cloud posture drifting from the secure baseline established during development.
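A simple way to picture drift detection is a comparison of the deployed configuration against the baseline captured from code. The sketch below assumes both are available as flat JSON maps of resource attributes; the file names and structure are hypothetical.

```python
# Hypothetical drift check: compare a snapshot of deployed resource settings
# against the baseline captured from infrastructure as code, and report any
# attribute that was changed directly in the cloud after provisioning.
# Both snapshots are assumed to be flat {resource: {attribute: value}} maps.
import json

def detect_drift(baseline_path: str, live_path: str) -> dict[str, dict]:
    with open(baseline_path) as f:
        baseline = json.load(f)
    with open(live_path) as f:
        live = json.load(f)

    drift: dict[str, dict] = {}
    for resource, attrs in baseline.items():
        live_attrs = live.get(resource, {})
        changed = {k: {"baseline": v, "live": live_attrs.get(k)}
                   for k, v in attrs.items() if live_attrs.get(k) != v}
        if changed:
            drift[resource] = changed
    return drift

if __name__ == "__main__":
    for resource, changes in detect_drift("baseline.json", "live.json").items():
        print(resource)
        for attr, vals in changes.items():
            print(f"  {attr}: {vals['baseline']!r} -> {vals['live']!r}")
```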


Recommended best practices

  • Protect the full cloud native stack, including serverless, containers, platform, and infrastructure
  • Embed security earlier in the development lifecycle to reduce the attack surface before cloud infrastructure is provisioned, and monitor for incremental risks throughout its lifecycle
  • Most importantly, prevent cloud posture drift from the secure baseline established during development once infrastructure is provisioned

Eye-opening statistics about open source security, license compliance, and code quality risk

99% of commercial codebases contain at least one open source component, with open source comprising 70% of the code overall, according to Synopsys.


Open source components and security

More notable is the continued widespread use of aging or abandoned open source components, with 91% of the codebases containing components that either were more than four years out of date or had seen no development activity in the last two years.

The most concerning trend in this year’s analysis is the mounting security risk posed by unmanaged open source, with 75% of audited codebases containing open source components with known security vulnerabilities, up from 60% the previous year. Similarly, nearly half (49%) of the codebases contained high-risk vulnerabilities, compared to 40% just 12 months prior.

“It’s difficult to dismiss the vital role that open source plays in modern software development and deployment, but it’s easy to overlook how it impacts your application risk posture from a security and license compliance perspective,” said Tim Mackey, principal security strategist of the Synopsys Cybersecurity Research Center.

Open source adoption continues to soar

Ninety-nine percent of codebases contain at least some open source, with an average of 445 open source components per codebase—a significant increase from 298 in 2018.

Seventy percent of the audited code was identified as open source, a figure that increased from 60% in 2018 and has nearly doubled since 2015 (36%).

Outdated and “abandoned” components are pervasive

Ninety-one percent of codebases contained components that either were more than four years out of date or had no development activity in the past two years.

Beyond the increased likelihood that security vulnerabilities exist, the risk of using outdated open source components is that updating them later can introduce unwanted functionality or compatibility issues.


Use of vulnerable open source components trending upward again

In 2019, the percentage of codebases containing vulnerable open source components rose to 75% after dropping from 78% to 60% between 2017 and 2018. Similarly, the percentage of codebases containing high-risk vulnerabilities jumped up to 49% in 2019 from 40% in 2018.

Fortunately, none of the codebases audited in 2019 were impacted by the infamous Heartbleed bug or the Apache Struts vulnerability that haunted Equifax in 2017.

Open source license conflicts continue to put intellectual property at risk

Despite its reputation for being “free,” open source software is no different from any other software in that its use is governed by a license. Sixty-eight percent of codebases contained some form of open source license conflict, and 33% contained open source components with no identifiable license.

The prevalence of license conflicts varied significantly by industry, ranging from a high of 93% (Internet & Mobile Apps) to a relatively low of 59% (Virtual Reality, Gaming, Entertainment, Media).

Automate manual security, risk, and compliance processes in software development

The future of business relies on being digital – but all software deployed needs to be secure and protect privacy. Yet, responsible cybersecurity gets in the way of what any company really wants to do: innovate fast, stay ahead of the competition, and wow customers!


In this podcast recorded at RSA Conference 2020, we’re joined by Ehsan Foroughi, Vice President of Products from Security Compass, an application security expert with 13+ years of management and technical experience in security research. He talks about a way of building software so that cybersecurity issues all but disappear, letting companies focus on what they do best.

Good morning. Today we have with us Ehsan Foroughi, Vice President of Products from Security Compass. We’ll be focusing on what Security Compass calls the Development Devil’s Choice and what’s being done about it. Ehsan, tell me a little about yourself.

A brief introduction: I started my career in cybersecurity around 15 years ago as a researcher doing malware analysis and reverse engineering. Around eight years ago I joined an up-and-coming company named Security Compass. Security Compass has been around for 14 years or so, and it started as a boutique consulting firm focused on helping developers code securely and push out their products.

When I joined, SD Elements, which is the company’s software platform and flagship product, was under development. I’ve worn many hats during that time: I’ve been a product manager, I’ve been a researcher, and now I own the R&D umbrella effort for the company.

Thank you. Can you tell me a little bit about Security Compass’ mission and vision?

The company’s vision is a world where people can trust technology and the way to get there is to help companies develop secure software without slowing down the business.

Here’s our first big question. The primary goals of most companies are to innovate fast, stay ahead of the competition and wow customers. Does responsible cybersecurity get in the way of that?

It certainly feels that way. Every industry nowadays relies on software to be competitive and generate revenue. Software is becoming a competitive advantage and it drives the enterprise value. As digital products are becoming critical, you’re seeing a lot of companies consider security as a first-class citizen in their DevOps effort, and they are calling it DevSecOps these days.

The problem is that when you dig into the detail, they’re mostly relying on reactive processes such as scanning and testing, which find the problems too late. By that time, they face a hard choice of whether to stop everything and go back to fix, or accept a lot of risk and move forward. We call this fast and risky development. It gets the software out to production fast, by eliminating the upfront processes, but it’s a ticking time bomb for the company and the brand. I wouldn’t want to be sitting on that.

Most companies know that they need proactive security like threat modeling, risk assessments, and security training. That’s the responsible thing to do, but it’s slow and it gets in the way of speed to market. We call this slow and safe development. It might be safe by way of security and compliance, but it opens the company up to competitive risk. This is what we call the Development Devil’s Choice. Every company that relies on software faces two bad choices: fast and risky, or slow and safe.

Interesting. Do you believe the situation will improve over time as companies get more experienced in dealing with this dilemma?

I think it’s going to get worse over time. There are more regulations coming. A couple of years ago GDPR came up, then the California Consumer Privacy Act, and then the new PCI regulations.

The technology is also getting more complex every day. We have Docker and Kubernetes, there’s cloud identity management, and the shelf life of technology keeps shrinking. We no longer have the 10-year end-of-life Linux systems that we could rely on.


So, how are companies dealing with this problem in the age of agile development?

I’m tempted to say that rather than dealing with it, they’re struggling with it. Most agile teams define their work by way of user stories. On rare occasions, the teams take the time to compile the security requirements and bake them into their stories. But in the majority of cases, the security requirements are unknown and implicit. This means that they rely on people’s good judgment and on expertise. That expertise is hard to find, and we have a skill shortage in the security space. When you do find experts, they’re also very expensive.

How do these teams integrate security compliance into their workflow?

In our experience, most agile teams have been relying on testing and scanning to find the issues, and that means they have a challenge. When they uncover an issue, they have to figure out whether they should go back and fix it, or take the risk and move forward. Either way, it’s a lot of patchwork. When the software gets shipped, everybody crosses their fingers and hopes that everything went well. This usually leads to a lot of silos. Security becomes oppositional to development.

What happens when the silos occur? Are teams wasting their effort? Reworking software?

It adds a lot of time and anxiety. The work ends up being manual, expensive and painfully deliberate. The security and compliance side of the business gets frustrated with development, the two sides find inconsistencies in each other’s work, and it just becomes a challenge.

No matter how companies develop software, their steps for security and compliance are likely not very accurate. That means that management also has no visibility into what’s going on. There are lots of tools and processes today to check on the software that is being built, but usually they don’t help make it secure from the start. They usually point out the problems and show how it was built wrong.

Finding that out so late is a challenge because it exacerbates the dilemma of development versus security. It’s like being told that you wouldn’t have needed heart surgery if you had eaten healthy food for the past 10 years. It’s a bit too late and not particularly helpful.

I’m hearing you describe a serious problem that’s haunting company leaders. It seems they have two pretty bad options for development: fast and risky, or slow and safe. Is that it? Are companies doomed to choose between these two?

Well, there’s hope. There is a third option emerging. You don’t need to be fast and risky or slow and safe. The third option is to be nearly as fast while being secure at the same time. We call it balanced development. It’s similar to how the Waze app knows where you’re driving and tells you specifically, at each step, where you should be going and where you should be turning.

The key is to shift security left in the cycle, iterate rapidly alongside development, and make sure it’s done in tandem. If this is done right, testing and scanning should not find anything at the end of the cycle. These systems mostly leverage automation to balance the development effort between fast and risky and slow and safe.


Ehsan, can you tell us more about these systems? How do they work and how do they support the jobs of security teams?

Well, automation is the key. It starts by capturing the knowledge of the experts in a knowledge base and automating it, so that the system understands what you’re working on and what you’re doing, and delivers the actions you need to take to bake security in right at the time you need them.

It also constantly updates the knowledge base to stay on top of regulation and technology changes, and during development the teams are advised of the latest changes. When the project is finished, the system is almost done with the security and compliance actions and activities, and all of it is documented so that management can see what risk they are taking on.

Thank you very much for the insight and for the thoughtful discussion. What advice would you give company leaders as they start to tackle these issues?

Well, I have a couple of pieces of advice, mostly based on the companies we have been working with. First, stay pragmatic and balanced. Focus on getting 80% fast and 80% secure. Don’t get bogged down. Second, educate your organization, especially the executives. Executive buy-in is very important. Without it you can’t change the process, and you can’t do it in silos from within one small team. You have to get people’s buy-in and support.

The next one is investing in automating the balanced approach. This investment is sometimes hard, but the earlier you make it, the better. I see a lot of companies get bogged down by investing in smaller, easier projects like updating and refreshing their scanning practice. It usually pays off to go to the heart of the problem and invest there, because all of your future investments will be better optimized.

I also find it useful, when working with developers, to always start with “why”: Why are you doing this? Why are you asking them to follow a certain process? If they understand the business value, they’ll be more cooperative with you.

And finally, try our system. We have a platform called SD Elements that enables you to automate your balanced development.

If anyone’s listening and interested in connecting with you or Security Compass, how can they find you?

Well, you should check out our website at www.securitycompass.com. We’d love to prove our motto to you: Go fast and stay safe. Thanks for joining us.