According to a recent study, only a minority of software developers actually work at software development companies. In other words, most developers build software for organizations whose core business is something else: nowadays, virtually every company builds software in some form or another.
As a professional in the field of information security, it is your task to protect information, assets, and technologies. Obviously, the software built by or for your company, which collects, transports, stores, processes, and ultimately acts upon your company's data, is of high interest. Secure development practices should be enforced early on, and security must be tested throughout the software's entire lifetime.
Within the (ISC)² common body of knowledge for CISSPs, software development security is listed as an individual domain. Several standards and practices covering security in the Software Development Lifecycle (SDLC) are available: ISO/IEC 27034:2011, ISO/IEC TR 15504, and NIST SP 800-64 Revision 2, to name a few.
All of the above ask for continuous assessment and control of artifacts on the source-code level, especially regarding coding standards and Common Weakness Enumerations (CWE), but only briefly mention static application security testing (SAST) as a possible way to address these issues. In the search for possible concrete tools, NIST provides SP 500-268 v1.1 “Source Code Security Analysis Tool Function Specification Version 1.1”.
In May 2019, NIST withdrew the aforementioned SP 800-64 Rev. 2. NIST SP 500-268 was published over nine years ago. This seems symptomatic of an underlying issue: the standards cannot keep up with the rapid pace of development and change in the field.
A good example is the rise of the programming language Rust, which addresses a major source of security issues in the classically used language C++, namely memory management. Major players in the field such as Microsoft and Google saw great advantages and announced that they would direct future development toward Rust. Yet while the standards mention that some development languages are superior to others, neither Rust itself nor the mechanisms it uses are mentioned.
In the field of Static Code Analysis, the information in NIST SP 500-268 is not wrong, but the paper simply does not mention advances in the field.
Let us briefly discuss two aspects: First, the wide use of open source software gave us insight into a vast quantity of source code changes and the reasoning behind them (security, performance, style). On top of that, we have seen increasing capacities of CPU power to process this data, accompanied by algorithmic improvements. Nowadays, we have a large lake of training data available. To use our company as an example, in order to train our underlying model for C++ alone, we are scanning changes in over 200,000 open source projects with millions of files containing rich history.
Secondly, in the past decade we have witnessed tremendous advances in machine learning. We see tools like GPT-3 and their applications to source code being discussed widely. Classically, static source code analysis was the domain of Symbolic AI: facts and rules applied to source code. Source code is well suited to this approach, since it has a well-defined syntax and grammar. The downside is that these rules were developed by engineers, which limits the pace at which rules can be generated. The idea, then, is to automate rule construction using machine learning.
Recently, we see research in the field of machine learning being applied to source code. Again, let us use our company as an example: By using the vast amount of changes in open source, our system looks out for patterns connected to security. It presents possible rules to an engineer together with found cases in the training set—both known and fixed, as well as unknown.
Also, the system supports parameters in the rules. Possible values for these parameters are collected by the system automatically. As a practical example, taint analysis follows incoming data to its use inside of the application to make sure the data is sanitized before usage. The system automatically learns possible sources, sanitization, and sink functions.
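A minimal sketch of such taint tracking in Python may help make this concrete. The source, sanitizer, and sink names below are hypothetical placeholders; as described above, a real system would learn these lists automatically rather than have them hand-coded:

```python
# Toy taint analysis over a stream of (function, input_var, output_var) calls.
# SOURCES, SANITIZERS, and SINKS are illustrative names, not a real API.
SOURCES = {"read_request_param"}   # functions that introduce untrusted data
SANITIZERS = {"escape_sql"}        # functions that neutralize the taint
SINKS = {"run_query"}              # functions where tainted data is dangerous

def find_taint_violations(instructions):
    """Report sink calls that receive data derived from a source
    without passing through a sanitizer first."""
    tainted = set()
    violations = []
    for func, in_var, out_var in instructions:
        if func in SOURCES:
            tainted.add(out_var)                  # fresh untrusted value
        elif func in SANITIZERS and in_var in tainted:
            tainted.discard(out_var)              # sanitized result is clean
        elif func in SINKS and in_var in tainted:
            violations.append((func, in_var))     # tainted data reaches a sink
        elif in_var in tainted:
            tainted.add(out_var)                  # taint propagates otherwise
    return violations
```

Running the checker over a call sequence that skips the sanitizer flags the sink call, while the sanitized variant passes cleanly.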
Back to the NIST Special Papers: With the withdrawal of SP 800-64 Rev 2, users were pointed to NIST SP 800-160 Vol 1 for the time being until a new, updated white paper is published. This was at the end of May 2019. The nature of these papers is to only describe high-level best practices, list some examples, and stay rather vague in concrete implementation. Yet, the documents are the basis for reviews and audits. Given the importance of the field, it seems as if a major component is missing. It is also time to think about processes that would help us to keep up with the pace of technology.
Computer scientists have developed a new artificial intelligence (AI) system that may be able to identify malicious code that hijacks supercomputers to mine for cryptocurrency such as Bitcoin and Monero.
“Based on recent computer break-ins in Europe and elsewhere, this type of software watchdog will soon be crucial to prevent cryptocurrency miners from hacking into high-performance computing facilities and stealing precious computing resources,” said Gopinath Chennupati, a researcher at Los Alamos National Laboratory and co-author of a new paper in the journal IEEE Access.
“Our deep learning artificial intelligence model is designed to detect the abusive use of supercomputers specifically for the purpose of cryptocurrency mining.”
Detecting cryptocurrency miners
Legitimate cryptocurrency miners often assemble enormous computer arrays dedicated to digging up the digital cash. Less savory miners have found they can strike it rich by hijacking supercomputers, provided they can keep their efforts hidden.
The new AI system is designed to catch them in the act by comparing programs based on graphs, which are like fingerprints for software.
All programs can be represented by graphs that consist of nodes linked by lines, loops, or jumps. Much as human criminals can be caught by comparing the whorls and arcs on their fingertips to records in a fingerprint database, the new AI system compares the contours in a program’s flow-control graph to a catalog of graphs for programs that are allowed to run on a given computer.
Instead of finding a match to a known criminal program, however, the system checks to determine whether a graph is among those that identify programs that are supposed to be running on the system.
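The idea of a graph "fingerprint" can be sketched in a few lines. The code below is an illustration of the general technique, not the Los Alamos system itself: it hashes a control-flow graph with a Weisfeiler-Lehman-style relabeling, so that structurally identical graphs yield the same fingerprint no matter how their nodes are named.

```python
import hashlib

def cfg_fingerprint(edges, rounds=3):
    """Structural fingerprint of a control-flow graph given as (src, dst)
    edges. Isomorphic graphs get the same fingerprint; WL refinement
    separates most (though not all) non-isomorphic graphs."""
    nodes = {n for e in edges for n in e}
    succ = {n: [d for s, d in edges if s == n] for n in nodes}
    labels = {n: "0" for n in nodes}              # start with uniform labels
    for _ in range(rounds):
        labels = {n: hashlib.sha256(
                      (labels[n] + "|" + ",".join(sorted(labels[m] for m in succ[n])))
                      .encode()).hexdigest()[:16]
                  for n in nodes}
    return hashlib.sha256("".join(sorted(labels.values())).encode()).hexdigest()

def is_allowed(program_edges, allowed_fingerprints):
    """Check a program's CFG fingerprint against the allow-list catalog."""
    return cfg_fingerprint(program_edges) in allowed_fingerprints
```

Because only the graph's shape enters the hash, renaming variables or inserting decoy comments, the disguises mentioned below, leaves the fingerprint unchanged.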
How reliable is it?
The researchers tested their system by comparing a known, benign code sample to an abusive Bitcoin-mining code. They found that their system identified the illicit mining operation much more quickly and reliably than conventional, non-AI analyses.
Because the approach relies on graph comparisons, it cannot be fooled by common techniques that illicit cryptocurrency miners use to disguise their codes, such as including obfuscating variables and comments intended to make the codes look like legitimate programming.
While this graph-based approach may not offer a completely foolproof solution for all scenarios, it significantly expands the set of effective approaches for cyberdetectives to use in their ongoing efforts to stifle cybercriminals.
New research from Trend Micro highlights design flaws in legacy programming languages and introduces new secure coding guidelines. These are designed to help Industry 4.0 developers greatly reduce the software attack surface, and therefore decrease business disruption in OT environments. Conducted jointly with Politecnico di Milano, the research details how design flaws in legacy programming languages could lead to vulnerabilities throughout the layers of the software stack, including automation task programs.
Programming quantum computers is becoming easier: computer scientists at ETH Zurich have designed the first programming language that can be used to program quantum computers as simply, reliably and safely as classical computers.
“Programming quantum computers is still a challenge for researchers,” says Martin Vechev, computer science professor in ETH’s Secure, Reliable and Intelligent Systems Lab (SRI), “which is why I’m so excited that we can now continue ETH Zurich’s tradition in the development of quantum computers and programming languages.”
He adds: “Our quantum programming language Silq allows programmers to utilize the potential of quantum computers better than with existing languages, because the code is more compact, faster, more intuitive and easier to understand for programmers.”
Quantum computing has been seeing increased attention over the last decade, since these computers, which function according to the principles of quantum physics, have enormous potential.
Today, most researchers believe that these computers will one day be able to solve certain problems faster than classical computers, since to perform their calculations they use entangled quantum states in which various bits of information overlap at a certain point in time. This means that in the future, quantum computers will be able to efficiently solve problems which classical computers cannot solve within a reasonable timeframe.
This quantum supremacy has yet to be conclusively proven. However, some significant technical advances have been achieved recently. In late summer 2019, a quantum computer succeeded in solving a problem, albeit a very specific one, more quickly than the fastest classical computer.
For certain “quantum algorithms”, i.e. computational strategies, it is also known that they are faster than classical algorithms, which do not exploit the potential of quantum computers. To date, however, these algorithms still cannot be calculated on existing quantum hardware because quantum computers are currently still too error-prone.
Expressing the programmer’s intent
Utilizing the potential of quantum computation not only requires the latest technology, but also a quantum programming language to describe quantum algorithms. In principle, an algorithm is a “recipe” for solving a problem; a programming language describes the algorithm so that a computer can perform the necessary calculations.
Today, quantum programming languages are tied closely to specific hardware; in other words, they describe precisely the behavior of the underlying circuits. For programmers, these “hardware description languages” are cumbersome and error-prone, since the individual programming instructions must be extremely detailed and thus explicitly describe the minutiae needed to implement quantum algorithms.
This is where Vechev and his group come in with their development of Silq. “Silq is the first quantum programming language that is not designed primarily around the construction and functionality of the hardware, but on the mindset of the programmers when they want to solve a problem – without requiring them to understand every detail of the computer architecture and implementation,” says Benjamin Bichsel, a doctoral student in Vechev’s group who is supervising the development of Silq.
Computer scientists refer to computer languages that abstract from the technical details of the specific type of computer as high-level programming languages. Silq is the very first high-level programming language for quantum computers.
High-level programming languages are more expressive, meaning that they can describe even complex tasks and algorithms with less code. This makes them more comprehensible and easier to use for programmers. They can also be used with different computer architectures.
Eliminating errors through automatic uncomputation
The greatest innovation and simplification that Silq brings to quantum programming languages concerns a source of errors that has plagued quantum programming until now. A computer calculates a task in several intermediate steps, which creates intermediate results or temporary values.
To free up memory, classical computers automatically erase these values. Computer scientists refer to this as "garbage collection", since the superfluous temporary values are disposed of.
In the case of quantum computers, this disposal is trickier due to quantum entanglement: the previously calculated values can interact with the current ones, interfering with the correct calculation. Accordingly, cleaning up such temporary values on quantum computers requires a more advanced technique of so-called uncomputation.
“Silq is the first programming language that automatically identifies and erases values that are no longer needed,” explains Bichsel. The computer scientists achieved this by applying their knowledge of classical programming languages: their automatic uncomputation method uses only programming commands that are free of any special quantum operations – they are “qfree”, as Vechev and Bichsel say.
“Silq is a major breakthrough in terms of optimising the programming of quantum computers; it is not the final phase of development,” says Vechev. There are still many open questions, but because Silq is easier to understand, Vechev and Bichsel hope to stimulate both the further development of quantum programming languages and the theory and development of new quantum algorithms.
“Our team of four has made the breakthrough after two years of work thanks to the combination of different expertise in language design, quantum physics and implementation. If other research and development teams embrace our innovations, it will be a great success,” says Bichsel.
Security issues for APIs
The many benefits that APIs bring to the software and application development communities – namely, that they are well documented, publicly available, standard, ubiquitous, efficient, and easy to use – are now being leveraged by bad actors to execute high-profile attacks against public-facing applications. For example, we know that developers can use APIs to connect resources like web registration forms to many different backend systems. The resultant flexibility for tasks like backend updates also provides support for automated attacks.
The security conundrum for APIs is that whereas most practitioners would recommend design decisions that make resources more hidden and less available, successful deployment of APIs demands willingness to focus on making resources open and available. This helps explain the attention on this aspect of modern computing, and why it is so important for security teams to identify good risk mitigation strategies for API usage.
Security threats to APIs
OWASP risks to APIs
In addition to its focus on risks to general software applications, OWASP has also provided useful guidance for API developers to reduce security risk in their implementations. Given the prominence of the OWASP organization in the software community, it is worth reviewing the 2019 Top 10 API Security Risks (with wording taken from the OWASP website):
1. Broken Object Level Authorization. APIs tend to expose endpoints that handle object identifiers, creating a wide attack surface level access control issue. Object level authorization checks should be considered in every function that accesses a data source using an input from the user.
2. Broken User Authentication. Authentication mechanisms are often implemented incorrectly, allowing attackers to compromise authentication tokens or to exploit implementation flaws to assume other user’s identities temporarily or permanently. Compromising a system’s ability to identify the client/user compromises API security overall.
3. Excessive Data Exposure. Looking forward to generic implementations, developers tend to expose all object properties without considering their individual sensitivity, relying on clients to perform the data filtering before displaying it to the user.
4. Lack of Resources & Rate Limiting. Quite often, APIs do not impose any restrictions on the size or number of resources that can be requested by the client/user. Not only can this impact the API server performance, leading to Denial of Service (DoS), but also leaves the door open to authentication flaws such as brute force.
5. Broken Function Level Authorization. Complex access control policies with different hierarchies, groups, and roles, and an unclear separation between administrative and regular functions, tend to lead to authorization flaws. By exploiting these issues, attackers gain access to other users’ resources and/or administrative functions.
6. Mass Assignment. Binding client provided data (e.g., JSON) to data models, without proper properties filtering based on a whitelist, usually lead to mass assignment. Either guessing objects properties, exploring other API endpoints, reading the documentation, or providing additional object properties in request payloads, allows attackers to modify object properties they are not supposed to.
7. Security Misconfiguration. Security misconfiguration is commonly a result of unsecure default configurations, incomplete or ad-hoc configurations, open cloud storage, misconfigured HTTP headers, unnecessary HTTP methods, permissive Cross-Origin resource sharing (CORS), and verbose error messages containing sensitive information.
8. Injection. Injection flaws, such as SQL, NoSQL, command injection, etc., occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s malicious data can trick the interpreter into executing unintended commands or accessing data without proper authorization.
9. Improper Assets Management. APIs tend to expose more endpoints than traditional web applications, making proper and updated documentation highly important. Proper hosts and deployed API versions inventory also play an important role to mitigate issues such as deprecated API versions and exposed debug endpoints.
10. Insufficient Logging & Monitoring. Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more systems to tamper with, extract, or destroy data. Most breach studies demonstrate the time to detect a breach is over 200 days, typically detected by external parties rather than internal processes or monitoring.
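As a concrete illustration of the first risk, broken object level authorization, the sketch below shows the check OWASP asks for: every function that accesses a data object verifies that the requesting user may see that particular object. The endpoint and data names are hypothetical.

```python
# Hypothetical order store; in a real API this would be a database lookup.
orders = {
    "o-1": {"owner": "alice", "total": 30},
    "o-2": {"owner": "bob",   "total": 45},
}

def get_order(order_id, requesting_user):
    """Return an order only if the requester owns it.
    The vulnerable pattern omits the ownership check and returns any
    order whose identifier the client happens to guess."""
    order = orders.get(order_id)
    if order is None:
        return ("404 Not Found", None)
    if order["owner"] != requesting_user:     # the object-level check
        return ("403 Forbidden", None)
    return ("200 OK", order)
```

Without the ownership comparison, a client could enumerate identifiers ("o-1", "o-2", ...) and read every user's orders, which is exactly the wide attack surface the OWASP entry describes.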
API security requirements
As exemplified by the OWASP list, the cyber security community is beginning to identify many familiar, canonical issues that emerge in the use of APIs for public-facing applications. Below are five generalized cyber security requirements for APIs that come up frequently in design and development contexts for both legacy and new Internet applications:
API visibility
The adage that knowledge is power seems appropriate when it comes to API visibility. Application developers and users need to know which APIs are being published, how and when they are updated, who is accessing them, and how they are being accessed. Understanding the scope of one's API usage is the first step toward securing them.
Access control
API access is often loosely controlled, which can lead to undesired exposure. Ensuring that the correct set of users has appropriate access permissions for each API is a critical security requirement that must be coordinated with enterprise identity and access management (IAM) systems.
Bot mitigation
In some environments, as much as 90% of the respective application traffic (e.g., account login or registration, shopping cart checkout) is generated by automated bots. Understanding and managing traffic profiles, including differentiating good bots from bad ones, is necessary to prevent automated attacks without blocking legitimate traffic. Effective complementary measures include implementing whitelist, blacklist, and rate-limiting policies, as well as geo-fencing specific to use-cases and corresponding API endpoints.
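One of the complementary measures mentioned, rate limiting, can be sketched with a classic token bucket. This is a simplified illustration; a production limiter would typically keep one bucket per client, IP address, or API endpoint.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: requests spend tokens, and
    tokens refill at a fixed rate up to a maximum burst capacity."""

    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)       # start with a full bucket
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        """Return True if the request may proceed, False if throttled."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A bucket with capacity 3 and no refill admits exactly three requests in a burst and rejects the fourth, which is the behavior that blunts brute-force and scraping bots.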
Vulnerability exploit prevention
APIs simplify attack processes by eliminating the web form or the mobile app, thus allowing a bad actor to more easily exploit a targeted vulnerability. Protecting API endpoints from business logic abuse and other vulnerability exploits is thus a key API security mitigation requirement.
Data loss prevention
Preventing data loss over exposed APIs for appropriately privileged users or otherwise, either due to programming errors or security control gaps, is also a critical security requirement. Many API attacks are designed specifically to gain access to critical data made available from back-end servers and systems.
The API community continues to drive toward more standardized agreement on the optimal approach to security. To this end, industry groups such as the OAuth community have proposed criteria for API security that are quite useful. The most likely progression is that the software security community will continue to refine its understanding of and insight into the full range of API security requirements in the coming years. Observers should thus expect continued evolution in this area.
API security methods
API abuse in action
By design, APIs are stateless: the initial request and response are self-contained, holding all the information needed to complete the transaction. Making program calls to an API directly, or as part of a mobile or web application, improves user experience and overall performance. It also makes it very easy for a bad actor to script and automate an attack, as highlighted in the two examples below.
Account takeover and romance fraud: Zoosk is a well-known dating application. Bad actors decompiled the Zoosk app to uncover account login APIs. Using automation and attack toolkits, they then executed account takeover attacks. In some cases, compromised accounts were used to establish a personal relationship with another Zoosk user and, as the relationship blossomed, the bad actor requested money due to a sudden death or illness in the family. The unsuspecting user sent the money to the bad actor, who was never heard from again. Prior to implementing Cequence, romance scams at Zoosk averaged $12,000 per occurrence. Now, they are virtually eliminated, resulting in increased user confidence and strengthened brand awareness.
Account takeover and financial fraud: Another example of APIs being targeted with an automated attack involves a large financial services customer finding that attackers had targeted its mobile application login API to execute account takeovers. If successful, the bad actors could attempt to commit financial fraud by transferring funds across the Open Funds Transfer (OFX) API. OFX, of course, is the industry standard API for funds transfer within the financial services community, and as such the APIs are publicly-available and well-documented to facilitate use.
Contributing author: Matthew Keil, Director of Product Marketing, Cequence.
The use of open source code in modern software has become nearly ubiquitous. It makes perfect sense: facing ever-increasing pressures to accelerate the rate at which new applications are delivered, developers value the ready-made aspect of open source components which they can plug in where needed, rather than building a feature from the ground up.
Indeed, this practice has become so common that today the average application is composed mostly of open source libraries, with these components making up more than 80% of the average codebase.
But the widespread use of open source code has certain consequences. As with custom or home-grown code, open source libraries can contain vulnerabilities, and those vulnerabilities may be exploited by cybercriminals targeting these components as attack vectors to gain access to networks, intercept sensitive data, and influence or impede an application’s functionality. Open source code is distinct from custom code, however, in that its vulnerabilities – and many exploits for them – are published online, making it a particularly attractive target for malicious actors.
Calling all “chefs”
Any software developer knows that sometimes solving a problem is as simple as changing one’s perspective on the approach – which is why I’d like to introduce the “chef” analogy. It is often said that building software is like cooking fine cuisine. When cooking in your kitchen, you probably use some of your own know-how, a combination of recipes you’ve researched, and some premade ingredients that would simply be impractical to make on your own when you can get a better version right off-the-shelf. Building software that uses open source code follows much the same formula.
With this understanding, we can better visualize an approach to how to secure software in the age of open source, as a combination of selecting the right recipe, understanding your ingredients, and having the right tools and utensils in your “kitchen” to get the job done.
Finding the recipe
When getting ready to make a new dish, or in this case application, a common practice is to research a “recipe” as a starting point. Not all ‘recipes’ are created equal, and some will yield better results than others. The same applies to open source components.
Even if two components have the same name, they can be very different depending on which organization or developer community has created them, or the various iterations and forks which they have experienced. While they might share similar purpose or functionality, these components might contain slight changes that reflect the needs or preferences of the people who influenced their evolution. A good example of this is the difference between Red Hat Enterprise Linux and Ubuntu. In practice, these slight differences can add up to create a significant impact on functionality, compatibility, and security, and thus must be considered when researching which “recipe” to follow.
Choosing the best ingredients
As mentioned, vulnerabilities in open source components mean vulnerabilities in the software that leverages them. Therefore, just as it is important to know that the ingredients you’re using when cooking have not spoiled, it is essential to understand any existing vulnerabilities in the open source components being used. Ingredients that have gone bad can ruin what would otherwise be a perfectly good dish and, likewise, vulnerable open source components can ruin an otherwise secure application.
As with ingredients and food products, some vendors will issue recalls for bad batches. When using open source libraries from known organizations like Red Hat or Apache, for example, developers may receive “recall” notices by way of alerts to new vulnerabilities or patches which address security risks in the software they provide. It is quite possible, however, that a developer may need a community-driven component rather than one supported by large enterprises.
In this instance, the responsibility to identify and fix vulnerabilities falls on the developers. This is much easier said than done, as it is one thing to bear the burden of identifying and resolving these vulnerabilities by developing a new component version, and it is another to communicate the need to address the vulnerabilities to everyone using the vulnerable component version. Getting this done efficiently ultimately comes down to having the right equipment on hand.
Let “utensils” help
Just as some recipes will call for the use of a mixer while specifying that a whisk can be substituted at the cost of time, efficiency, and effectiveness, software being developed with open source code calls for its own tools to maximize quality. The equipment in a developer’s software “kitchen” is a key factor in whether or not the code they produce is secure and of high quality. When open source code is in use, Software Composition Analysis (SCA) tools are preferred for this.
SCA refers to the process of analyzing software, detecting the open source components within, and identifying associated risks, including security risks and license risks. Security risk refers to vulnerabilities that can be tracked in publicly available databases such as the National Vulnerability Database (NVD) or discovered by private security research teams. License risk can be a function of unfavorable license requirements associated with a particular component, the failure to comply with license requirements, or conflicts between unique licenses for different components within the same software project.
SCA solutions help developers by detecting open source components, giving insights into any associated vulnerabilities, and providing actionable information around risk and remediation. They also need to work well with other “appliances,” such as other security, development, and issue management tools. With the right SCA tool on hand, developers leveraging open source code can be sure that the software they ship will be much more secure.
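The core matching step an SCA tool performs can be sketched as a lookup of declared components against an advisory feed. The advisory entries below are invented for illustration; real tools consult databases such as the NVD and commercial research feeds.

```python
# Hypothetical advisory feed mapping (component, version) to known CVE IDs.
# These entries are made up; a real SCA tool would query the NVD or similar.
ADVISORIES = {
    ("example-lib", "1.2.0"): ["CVE-0000-0001"],
    ("other-lib",   "2.0.1"): ["CVE-0000-0002"],
}

def scan_manifest(components):
    """Given (name, version) pairs from a dependency manifest, return the
    components that have known advisories and the identifiers involved."""
    findings = {}
    for name, version in components:
        advisories = ADVISORIES.get((name, version))
        if advisories:
            findings[(name, version)] = advisories
    return findings
```

Real SCA adds the harder parts around this lookup: detecting which components are actually present (including transitive and vendored ones), matching version ranges rather than exact versions, and attaching license metadata.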
Secure software and open source: Cooking up a masterpiece
It is always important to acknowledge that there is no silver bullet when it comes to software security, and open source is no exception. Keeping software secure is always going to take diligence and careful attention. Applications must be reviewed, then reviewed again to ensure that nothing has been missed.
Even if a developer follows all best practices, vulnerabilities can still persist, and new vulnerabilities may emerge in previously released software where none had been known. By following the advice laid out above, developers using open source code can approach the challenge with a fresh perspective and understanding, increasing their open source security and serving up software masterpieces in no time.
This is the third in a series of articles that introduces and explains application programming interface (API) security threats, challenges, and solutions for participants in software development, operations, and protection.
Explosion of APIs
The API explosion is also driven by several business-oriented factors. First, enterprises are moving away from large monolithic applications that are updated annually at best. Instead, legacy and new applications are being broken into small, independently functional components, often rolled out as container-based …
GitHub has made available two new security features for open and private repositories: code scanning (as a GitHub-native experience) and secret scanning.
With the former, it aims to prevent vulnerabilities from ever being introduced into software and, ideally, help developers eliminate entire bug classes forever. With the latter, it wants to make sure that developers are not inadvertently leaking secrets (e.g., cloud tokens, passwords, etc.) in their repositories.
The code scanning feature, available for setup in every GitHub repository (in the Security tab), is powered by CodeQL, a semantic code analysis engine that GitHub made available last year.
While code analysis with CodeQL is not new, this new feature makes it part of the developers’ code review workflow.
With code scanning enabled, every ‘git push’ is scanned for potential security vulnerabilities. Results are displayed in the pull request for the developer to analyze, and additional information about the vulnerability and recommendations on how to fix things are offered, so they can learn from their mistakes.
Any public project can sign up for code scanning for free – GitHub will pay for the compute resources needed.
For a peek at how this will work in practice, check out the demonstration by Grey Baker, Director of Product Management at GitHub (the relevant segment starts at 31:40).
Secret scanning (formerly “token scanning”) has been available for public repositories since 2018, but it can now be used for private repositories as well.
“With over ten million potential secrets identified, customers have asked to have the same capability for their private code. Now secret scanning also watches private repositories for known secret formats and immediately notifies developers when they are found,” explained Shanku Niyogi, Senior VP of Product at GitHub.
“We’ve worked with many partners to expand coverage, including AWS, Azure, Google Cloud, npm, Stripe, and Twilio.”
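Under the hood, this kind of scanning amounts to matching repository contents against known secret formats. The Ruby sketch below shows the idea; the patterns are illustrative assumptions that only approximate real providers’ token formats, not GitHub’s actual rule set:

```ruby
# Minimal sketch of format-based secret detection, similar in spirit to
# GitHub's secret scanning. Patterns are illustrative approximations.
SECRET_PATTERNS = {
  "AWS access key ID"  => /\bAKIA[0-9A-Z]{16}\b/,
  "Stripe live secret" => /\bsk_live_[0-9a-zA-Z]{24,}\b/,
  "npm access token"   => /\bnpm_[0-9a-zA-Z]{36}\b/
}.freeze

# Scan a blob of text and return [label, match] pairs for every hit.
def find_secrets(text)
  SECRET_PATTERNS.flat_map do |label, pattern|
    text.scan(pattern).map { |match| [label, match] }
  end
end
```

A real scanner would additionally walk the full git history and verify candidate tokens with the issuing provider before notifying anyone.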
Researchers have discovered over 760 malicious Ruby packages (aka “gems”) typosquatting on RubyGems, the Ruby community’s gem repository / hosting service.
ReversingLabs analysts wanted to see how widespread the practice of package typosquatting is within RubyGems.
The practice refers to the intentional use of package names very similar to those of popular packages (e.g., atlas-client instead of atlas_client), with the intention of tricking users into installing them and unknowingly running the malicious code they contain.
“We crafted a list of the most popular gems to use as a baseline. On a weekly basis, we collected gems that were newly pushed to the RubyGems repository. If we detected a new gem with a similar name to any of the baseline list gems, we flagged it as interesting for analysis,” threat analyst Tomislav Maljić explained.
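The flagging step Maljić describes can be sketched as a name normalization plus an edit-distance check against the baseline. The threshold and helper names below are illustrative assumptions, not ReversingLabs’ actual tooling:

```ruby
# Classic dynamic-programming Levenshtein distance.
def edit_distance(a, b)
  prev = (0..b.length).to_a
  a.each_char.with_index(1) do |ca, i|
    curr = [i]
    b.each_char.with_index(1) do |cb, j|
      cost = ca == cb ? 0 : 1
      curr << [prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost].min
    end
    prev = curr
  end
  prev.last
end

# Strip separators so that atlas-client and atlas_client collide.
def normalize(name)
  name.downcase.delete("-_")
end

# Flag a new gem if it is suspiciously close to (but not identical to)
# a popular gem from the baseline list.
def suspicious?(new_gem, baseline, max_distance: 1)
  baseline.any? do |popular|
    next false if new_gem == popular
    edit_distance(normalize(new_gem), normalize(popular)) <= max_distance
  end
end
```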
After analyzing the flagged gems, they found that all of them contained an executable file with the same filename and a PNG extension, presumably used to disguise the executable as an image file. The file was also located on the same path in every gem.
The packages also contained a gemspec file, which holds basic metadata about the gem but can also declare extensions. The declared extension checks the target platform and, if it is Windows, renames the PNG file to an EXE and executes it.
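For context, this is the (entirely legitimate) mechanism being abused: a gemspec can declare an extension script, and RubyGems runs that script as ordinary Ruby code at install time. A benign, hypothetical example:

```ruby
# Benign illustration of the gemspec extension mechanism. All names here
# are hypothetical.
spec = Gem::Specification.new do |s|
  s.name    = "example-gem"
  s.version = "0.1.0"
  s.summary = "Demonstrates install-time code execution via extensions"
  s.authors = ["Example Author"]
  # RubyGems executes this script at 'gem install' time -- the hook the
  # malicious gems used to check the platform and rename their PNG
  # payload to an EXE on Windows.
  s.extensions = ["ext/example/extconf.rb"]
end
```

Because the extension script runs with the installing user’s privileges, anything declared here executes before the gem’s code is ever required.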
A Ruby script is then run that creates an additional script, which in turn:
- Creates an autorun registry key to ensure persistence
- Captures the user’s clipboard data in an infinite loop
- Checks whether the data matches the format of a cryptocurrency wallet address and, if it does, replaces it with an attacker-controlled one
Its goal is to redirect all potential cryptocurrency transactions to the attacker’s wallet.
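Stripped of the clipboard and registry plumbing, the core of such a hijacker is a single regex substitution. The sketch below reduces it to pure string handling; the pattern covers only legacy Bitcoin-style base58 addresses, and the attacker address is made up:

```ruby
# Legacy Bitcoin-style base58 address: starts with 1 or 3, 25-34 chars,
# excluding the ambiguous characters 0, O, I and l.
BTC_ADDRESS = /\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b/

ATTACKER_ADDRESS = "1AttackerAddressxxxxxxxxxxxxxxxxxx" # hypothetical

# Return the clipboard text with any wallet address swapped for the
# attacker's -- the core of the clipboard-hijacking loop.
def hijack(clipboard_text)
  clipboard_text.gsub(BTC_ADDRESS, ATTACKER_ADDRESS)
end
```

The same comparison, run from the defender’s side (clipboard contents silently changing while still matching a wallet format), is one way this class of malware can be spotted.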
All the malicious gems were published by two accounts, which the researchers believe were created by the same threat actor. In fact, they believe that the same threat actor mounted at least two previous malicious campaigns against the RubyGems repository.
“The same file path ‘/ext/trellislike/unflaming/waffling/’ was used in all the attacks. Likewise, the malicious intent was related to cryptomining in all cases,” Maljić explained their reasoning.
ReversingLabs provided a list of the affected packages, which have since been removed from RubyGems. The two accounts created by the threat actor have been suspended.
This is not the first time threat actors tried to plant malicious packages in software repositories for popular programming languages. ReversingLabs previously flagged a batch of malicious Python libraries hosted on Python Package Index (PyPI), and developer Jussi Koljonen found that several older versions of popular Ruby packages on RubyGems were trojanized to steal information and mine cryptocurrency.
The future of business relies on being digital – but all software deployed needs to be secure and protect privacy. Yet, responsible cybersecurity gets in the way of what any company really wants to do: innovate fast, stay ahead of the competition, and wow customers!
In this podcast recorded at RSA Conference 2020, we’re joined by Ehsan Foroughi, Vice President of Products from Security Compass, an application security expert with 13+ years of management and technical experience in security research. He talks about a way of building software so that cybersecurity issues all but disappear, letting companies focus on what they do best.
Good morning. Today we have with us Ehsan Foroughi, Vice President of Products from Security Compass. We’ll be focusing on what Security Compass calls the Development Devil’s Choice and what’s being done about it. Ehsan, tell me a little about yourself.
A brief introduction: I started my career in cybersecurity around 15 years ago as a researcher doing malware analysis and reverse engineering. Around eight years ago I joined an up-and-coming company named Security Compass. Security Compass has been around for 14 years or so, and it started as a boutique consulting firm focused on helping developers code securely and push out their products.
When I joined, SD Elements, the software platform that is now the company’s flagship product, was under development. I’ve worn many hats during that time: I’ve been a product manager, I’ve been a researcher, and now I own the R&D umbrella effort for the company.
Thank you. Can you tell me a little bit about Security Compass’ mission and vision?
The company’s vision is a world where people can trust technology and the way to get there is to help companies develop secure software without slowing down the business.
Here’s our first big question. The primary goals of most companies are to innovate fast, stay ahead of the competition and wow customers. Does responsible cybersecurity get in the way of that?
It certainly feels that way. Every industry nowadays relies on software to be competitive and generate revenue. Software is becoming a competitive advantage and it drives the enterprise value. As digital products are becoming critical, you’re seeing a lot of companies consider security as a first-class citizen in their DevOps effort, and they are calling it DevSecOps these days.
The problem is that when you dig into the detail, they’re mostly relying on reactive processes such as scanning and testing, which find the problems too late. By that time, they face a hard choice of whether to stop everything and go back to fix, or accept a lot of risk and move forward. We call this fast and risky development. It gets the software out to production fast, by eliminating the upfront processes, but it’s a ticking time bomb for the company and the brand. I wouldn’t want to be sitting on that.
Most companies know that they need proactive security, like threat modeling, risk assessments, and security training. That’s the responsible thing to do, but it’s slow and it gets in the way of speed to market. We call this slow and safe development. It might be safe by way of security compliance, but it opens the company up to competitive risk. This is what we call the Development Devil’s Choice: every company faces two bad options, fast and risky or slow and safe.
Interesting. Do you believe the situation will improve over time as companies get more experienced in dealing with this dilemma?
I think it’s going to get worse over time. There are more regulations coming: a couple of years ago it was GDPR, then the California Consumer Privacy Act, and now the new PCI regulations.
The technology is also getting more complex every day. We have Docker and Kubernetes, there’s cloud identity management, and the shelf life of technology is shrinking. We no longer have the 10-year end-of-life Linux systems that we could rely on.
So, how are companies dealing with this problem in the age of agile development?
I’m tempted to say that rather than dealing with it, they’re struggling with it. Most agile teams define their work by way of user stories. On rare occasions, teams take the time to compile security requirements and bake them into their stories. But in the majority of cases, the security requirements are unknown and implicit. This means they rely on people’s good judgment and expertise. That expertise is hard to find, since we have a skill shortage in the security space, and when you do find it, it’s also very expensive.
How do these teams integrate security compliance into their workflow?
In our experience, most agile teams have been relying on testing and scanning to find issues, which leaves them with a challenge: when they uncover an issue, they have to figure out whether to go back and fix it or take the risk and move forward. Either way, it’s a lot of patchwork. When the software gets shipped, everybody crosses their fingers and hopes that everything went well. This usually leads to a lot of silos. Security becomes oppositional to development.
What happens when the silos occur? Are teams wasting their effort? Reworking software?
It adds a lot of time and anxiety. The work ends up being manual, expensive, and painfully deliberate. The security and compliance side of the business gets frustrated with development, inconsistencies surface between the two, and it just becomes a challenge.
No matter how companies develop software, their steps for security and compliance are likely not very accurate. That means management has no visibility into what’s going on. There are lots of tools and processes today to check the software being built, but usually they don’t help make it secure from the start. They usually point out the problems and show how it was built wrong.
Finding that out is a challenge because it exacerbates this dilemma of development versus security. It’s like being told that you didn’t need heart surgery if you ate healthy food for the past 10 years. It’s a bit too late and not particularly helpful.
I’m hearing you describe a serious problem that’s haunting company leaders. It seems they have two pretty bad options for development: fast and risky, or slow and safe. Is that it? Are companies doomed to choose between these two?
Well, there’s hope. There is a third option emerging: you don’t need to be fast and risky or slow and safe. You can be nearly as fast while being secure at the same time. We call it balanced development. It’s similar to how the Waze app knows where you’re driving and tells you, at each step, where you should be going and where you should be turning.
The key is to shift security left in the cycle, iterate rapidly alongside development, and make sure it’s done in tandem. If this is done right, testing and scanning should not find anything at the end of the cycle. These systems mostly leverage automation to balance the development effort between fast and risky and slow and safe.
Ehsan, can you tell us more about these systems? How do they work and how do they support the jobs of security teams?
Well, automation is the key. It starts by capturing the knowledge of the experts into a knowledge base, and automating so that the system understands what you’re working on, what you’re doing, and delivering the actions that you need to take to bake security in right at the time you need it.
It constantly also updates the knowledge base to stay on top of the regulation changes, technology changes, and during development the teams are advised of the latest changes. When the project is finished, the system is almost done with the security and compliance actions and activities, and all of it is also documented so that the management can see what risk they are taking on.
Thank you very much for the insight and for the thoughtful discussion. What advice would you give company leaders as they start to tackle these issues?
Well, I have a couple of pieces of advice, mostly based on the companies we have been working with. First, stay pragmatic and balanced: focus on getting 80% fast and 80% secure, and don’t get bogged down. Second, educate your organization, especially the executives. Executive buy-in is very important; without it you can’t change the process, and you can’t do it in silos from within one small team. You have to get people’s buy-in and support.
The next one is investing in automating the balanced approach. This investment is sometimes hard, but the earlier you make it, the better. I see a lot of companies get bogged down investing in smaller, easier projects like updating and refreshing their scanning practice. It usually pays off to go to the heart of the problem and invest there, because all of your future investments become more optimized.
I also find it useful, when working with developers, to always start with “why”. Why are you doing this? Why are you asking them to follow a certain process? If they understand the business value, they’ll be more cooperative.
And finally, try our system. We have a platform called SD Elements that enables you to automate your balanced development.
If anyone’s listening and interested in connecting with you or Security Compass, how can they find you?
Well, you should check out our website at www.securitycompass.com. We’d love to prove our motto to you: Go fast and stay safe. Thanks for joining us.
OWASP’s API Security Project has released the first edition of its top 10 list of API security risks.
The most common and perilous API security risks
API abuse is an ongoing problem and is expected to escalate in the coming years, as the number of API implementations continues to grow.
The OWASP API Security Project aims to provide software developers and code auditors with information about the risks brought on by insecure APIs.
Earlier this month, they published the official OWASP API Security Top 10 list, which looks like this:
1. Broken Object Level Authorization
2. Broken User Authentication
3. Excessive Data Exposure
4. Lack of Resources & Rate Limiting
5. Broken Function Level Authorization
6. Mass Assignment
7. Security Misconfiguration
8. Injection
9. Improper Assets Management
10. Insufficient Logging & Monitoring
Each of the risks comes with an explanation, example attack scenarios, and advice on how to mitigate it. The document also includes links to helpful free resources (education material, guides, cheat sheets, etc.) for developers and DevSecOps practitioners.
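To make the top entry concrete, here is a framework-free Ruby sketch of Broken Object Level Authorization: a handler that trusts a client-supplied ID versus one that checks ownership. The record and user structures are hypothetical, not from the OWASP document:

```ruby
# Hypothetical data store: invoices keyed by ID, each with an owner.
Invoice = Struct.new(:id, :owner_id, :body)

INVOICES = {
  1 => Invoice.new(1, 100, "alice's invoice"),
  2 => Invoice.new(2, 200, "bob's invoice")
}.freeze

# Vulnerable: any authenticated user can read any invoice by guessing IDs,
# because authorization stops at "is the caller logged in?".
def get_invoice_broken(_current_user_id, invoice_id)
  INVOICES[invoice_id]&.body
end

# Fixed: authorization is enforced at the object level, not just the route.
def get_invoice(current_user_id, invoice_id)
  invoice = INVOICES[invoice_id]
  return nil unless invoice && invoice.owner_id == current_user_id
  invoice.body
end
```

The fix is a single ownership check, but it has to be applied consistently on every endpoint that accepts an object ID, which is why this risk tops the list.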
The document can be downloaded from GitHub.
“There are issues that look simple, but are critical, like good housekeeping and documenting APIs. There are also complex issues of access control that might require some attention from the design phase,” Erez Yalon, director of security research at Checkmarx and co-lead on the OWASP API Security Project, told Help Net Security.
“To put it simply, follow this list closely – OWASP has done the groundwork for development teams and security professionals to improve their knowledge around security risks to look out for when implementing APIs. Understanding the vulnerabilities outlined within will help teams to mitigate against API security risks and to put systems into place moving forward.”
This first version of the list is based on publicly available data about API security incidents, security experts’ contributions, and discussions with security practitioners.
“We are planning another version of the OWASP API Security Top 10 in 2020,” he noted.
“This time, in addition to using the knowledge of the AppSec community, we will also use a public call for data that will enable us to fine-tune the list. Additionally, we will be working on a cheat sheet that will be a more practical guide for developers, pen-testers, and auditors.”
As adversaries set their sights on this emerging target, awareness and education around the security pitfalls outlined in the OWASP API Security Top 10 list will be key to the development of secure applications in the future, he concluded.