The majority of applications contain at least one security flaw and fixing those flaws typically takes months, a Veracode report reveals.
This year’s analysis of 130,000 applications found that it takes about six months for teams to close half the security flaws they find.
The report also uncovered some best practices that significantly improve these fix rates. It groups the relevant factors into "nature vs. nurture": some factors teams have very little control over, and others they control directly.
Within the “nature” side, factors such as the size of the application and organization as well as security debt were considered, while the “nurture” side accounts for actions such as scanning frequency, cadence, and scanning via APIs.
Fixing security flaws: Nature or nurture?
The report revealed that addressing issues with modern DevSecOps practices results in higher flaw remediation rates. For example, using multiple application security scan types, working within smaller or more modern apps, and embedding security testing into the pipeline via an API all make a difference in reducing time to fix security defects, even in apps with a less than ideal “nature.”
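As a rough illustration of what "embedding security testing into the pipeline via an API" can look like, here is a minimal Python sketch of a CI gate that fails a build when scan findings exceed a severity threshold. The finding format and severity labels are invented for this example; real scanners each define their own API and response shape.

```python
def build_should_fail(findings, threshold="high"):
    """Return True if any finding meets or exceeds the severity threshold."""
    order = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    level = order[threshold]
    return any(order.get(f.get("severity", "low"), 0) >= level for f in findings)

# In a real pipeline, findings would come from an HTTP call to the scanner's
# API after the build step; these are hand-written stand-ins.
findings = [
    {"id": "CWE-89", "severity": "high"},
    {"id": "CWE-327", "severity": "medium"},
]

if build_should_fail(findings):
    print("FAIL: blocking deploy until high-severity flaws are fixed")
```

Running a check like this on every commit, rather than scanning ad hoc, is what drives the faster fix rates the report describes.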
“The goal of software security isn’t to write applications perfectly the first time, but to find and fix the flaws in a comprehensive and timely manner,” said Chris Eng, Chief Research Officer at Veracode.
“Even when faced with the most challenging environments, developers can take specific actions to improve the overall security of the application with the right training and tools.”
Other key findings
Flawed applications are the norm: 76% of applications have at least one security flaw, but only 24% have high-severity flaws. This is a good sign that most applications do not have critical issues that pose serious risks to the application. Frequent scanning can reduce the time it takes to close half of observed findings by more than three weeks.
Open source flaws on the rise: while 70% of applications inherit at least one security flaw from their open source libraries, 30% of applications have more flaws in their open source libraries than in the code written in-house.
The key lesson is that software security comes from getting the whole picture, which includes identifying and tracking the third-party code used in applications.
Multiple scan types prove efficacy of DevSecOps: teams using a combination of scan types including static analysis (SAST), dynamic analysis (DAST), and software composition analysis (SCA) improve fix rates. Those using SAST and DAST together fix half of flaws 24 days faster.
Automation matters: those who automate security testing in the SDLC address half of the flaws 17.5 days faster than those that scan in a less automated fashion.
Paying down security debt is critical: the link between frequently scanning applications and faster remediation times has been established in prior research.
This year’s report also found that reducing security debt – fixing the backlog of known flaws – lowers overall risk. Older applications with high flaw density experience much slower remediation times, adding an average of 63 days to close half of flaws.
Google aims to improve security of browser engines, third-party Android devices and apps on Google Play
Researchers must also bear the costs of fuzzing in advance, even though their approach may not discover any bugs, and there is no guarantee they'll receive a reward even if it does. This might deter many of them and, consequently, bugs stay unfixed and exploitable for longer.
That’s why Google is offering $5,000 research grants in the form of Google Compute Engine credits.
Helping third parties in the Android ecosystem
The company is also set on improving the security of the Android ecosystem, and to that point it’s launching the Android Partner Vulnerability Initiative (APVI).
“Until recently, we didn’t have a clear way to process Google-discovered security issues outside of AOSP (Android Open Source Project) code that are unique to a much smaller set of specific Android OEMs,” the company explained.
“The APVI […] covers a wide range of issues impacting device code that is not serviced or maintained by Google (these are handled by the Android Security Bulletins).”
Issues already discovered, as well as those yet to be unearthed, are being shared through this bug tracker.
Simultaneously, the company is looking for a Security Engineering Manager in Android Security who will, among other things, lead a team that "will perform application security assessments against highly sensitive, third party Android apps on Google Play, working to identify vulnerabilities and provide remediation guidance to impacted application developers."
NIST has launched a crowdsourcing challenge to spur new methods to ensure that important public safety data sets can be de-identified to protect individual privacy.
The Differential Privacy Temporal Map Challenge includes a series of contests that will award a total of up to $276,000 for differential privacy solutions for complex data sets that include information on both time and location.
Critical applications vulnerability
For critical applications such as emergency planning and epidemiology, public safety responders may need access to sensitive data, but sharing that data with external analysts can compromise individual privacy.
Even if data is anonymized, malicious parties may be able to link the anonymized records with third-party data and re-identify individuals. And, when data has both geographical and time information, the risk of re-identification increases significantly.
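The linkage risk is easy to demonstrate in a few lines of Python. All records below are invented: an "anonymized" dataset still carries quasi-identifiers (location plus timestamp) that can be joined against public data to recover identities.

```python
# "Anonymized" records: names removed, but location and time retained.
anonymized = [
    {"record": 1, "zip": "20899", "seen_at": "2020-03-01T08:05"},
    {"record": 2, "zip": "20901", "seen_at": "2020-03-01T09:30"},
]

# Third-party data, e.g. scraped social-media check-ins.
public_checkins = [
    {"name": "Alice", "zip": "20899", "seen_at": "2020-03-01T08:05"},
]

# Join on the quasi-identifiers to re-identify individuals.
reidentified = [
    (a["record"], p["name"])
    for a in anonymized
    for p in public_checkins
    if (a["zip"], a["seen_at"]) == (p["zip"], p["seen_at"])
]
print(reidentified)  # record 1 is linked back to "Alice"
```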
“Temporal map data, with its ability to track a person’s location over a period of time, is particularly helpful to public safety agencies when preparing for disaster response, firefighting and law enforcement tactics,” said Gary Howarth, NIST prize challenge manager.
“The goal of this challenge is to develop solutions that can protect the privacy of individual citizens and first responders when agencies need to share data.”
Differential privacy provides much stronger data protection than anonymity; it’s a provable mathematical guarantee that protects personally identifiable information (PII).
By fully de-identifying data sets containing PII, researchers can ensure data remains useful while limiting what can be learned about any individual in the data regardless of what third-party information is available.
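The textbook differential-privacy primitive is the Laplace mechanism: add noise scaled to the query's sensitivity divided by the privacy budget epsilon, so any one individual's presence changes the released answer's distribution only by a bounded factor. A minimal sketch follows; real deployments should use vetted libraries rather than hand-rolled sampling.

```python
import math
import random

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with Laplace(sensitivity/epsilon) noise added."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

random.seed(0)
print(round(dp_count(1000, epsilon=0.5)))  # near 1000, but randomized
```

Smaller epsilon means more noise and stronger privacy; the challenge is precisely about preserving utility under such noise for temporal map data.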
The individual contests that make up the challenge will include a series of three “sprints” in which participants develop privacy algorithms and compete for prizes, as well as a scoring metrics development contest (A Better Meter Stick for Differential Privacy Contest) and a contest designed to improve the usability of the solvers’ source code (The Open Source and Development Contest).
The Better Meter Stick for Differential Privacy Contest will award a total prize purse of $29,000 for winning submissions that propose novel scoring metrics by which to assess the quality of differentially private algorithms on temporal map data.
The three Temporal Map Algorithms sprints will award a total prize purse of $147,000 over a series of three sprints to develop algorithms that preserve data utility of temporal and spatial map data sets while guaranteeing privacy.
The Open Source and Development Contest will award a total prize purse of $100,000 to teams leading in the sprints to increase their algorithm’s utility and usability for open source audiences.
20% of security professionals described their organizations' DevSecOps practices as "mature", 62% said they are improving, and 18% described them as "immature", a WhiteSource report finds.
The survey gathered responses from over 560 developers and application security professionals in North America and Western Europe about the state of DevSecOps implementation in their organizations.
Reaching full DevSecOps maturity
- In order to meet short deployment cycles, 73% of security professionals and developers feel forced to compromise on security.
- AppSec tools are purchased to 'check the box', disregarding developers' needs and processes, so tools are bought but not fully used by developers. The more mature an organization's DevSecOps practices, the more AppSec tools it uses.
- There is a significant "AppSec knowledge and skills gap" challenge that is largely neglected by organizations. While 60% of security professionals say they have had an AppSec program in place for at least a year, only 37% of developers surveyed were aware of an AppSec program running for longer than a year inside their organization.
- Security professionals’ top challenge is prioritization, but organizations lack the standardized processes to streamline vulnerability prioritization.
"Survey results show that while most security professionals and developers believe that their organizations are in the process of adopting DevSecOps, most organizations still have a way to go, especially when it comes to breaking down the silos separating development and security teams," said Rami Sass, CEO, WhiteSource.
“Full DevSecOps maturity requires organizations to implement DevSecOps across the board. Processes, tools, and culture need to evolve in order to break down the traditional silos and ensure that all teams share ownership of both security and agility.”
71% of healthcare and medical apps have at least one serious vulnerability that could lead to a breach of medical data, according to Intertrust.
The report investigated 100 publicly available global mobile healthcare apps across a range of categories—including telehealth, medical device, health commerce, and COVID-tracking—to uncover the most critical mHealth app threats.
Cryptographic issues pose one of the most pervasive and serious threats, with 91% of the apps in the study failing one or more cryptographic tests. This means the encryption used in these medical apps can be easily broken by cybercriminals, potentially exposing confidential patient data, and enabling attackers to tamper with reported data, send illegitimate commands to connected medical devices, or otherwise use the application for malicious purposes.
Bringing medical apps security up to speed
The study’s overall findings suggest that the push to reshape care delivery under COVID-19 has often come at the expense of mobile application security.
"Unfortunately, there's been a history of security vulnerabilities in the healthcare and medical space. Things are getting a lot better, but we still have a lot of work to do," said Bill Horne, General Manager of the Secure Systems product group and CTO at Intertrust.
“The good news is that application protection strategies and technologies can help healthcare organizations bring the security of their apps up to speed.”
The report on healthcare and medical mobile apps is based on an audit of 100 iOS and Android applications from healthcare organizations worldwide. All 100 apps were analyzed using an array of static application security testing (SAST) and dynamic application security testing (DAST) techniques based on the OWASP mobile app security guidelines.
- 71% of tested medical apps have at least one high level security vulnerability. A vulnerability is classified as high if it can be readily exploited and has the potential for significant damage or loss.
- The vast majority of medical apps (91%) have mishandled and/or weak encryption that puts them at risk for data exposure and IP (intellectual property) theft.
- 34% of Android apps and 28% of iOS apps are vulnerable to encryption key extraction.
- The majority of mHealth apps contain multiple security issues with data storage. For instance, 60% of tested Android apps stored information in SharedPreferences, leaving unencrypted data readily readable and editable by attackers and malicious apps.
- When looking specifically at COVID-tracking apps, 85% leak data.
- 83% of the high-level threats discovered could have been mitigated using application protection technologies such as code obfuscation, tampering detection, and white-box cryptography.
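The SharedPreferences finding above is easy to appreciate once you see that such files are plain XML on disk: anything stored there without app-level encryption is trivially readable by an attacker with filesystem access. The file contents below are invented, parsed with Python's standard library.

```python
import xml.etree.ElementTree as ET

# A hypothetical SharedPreferences file as it appears on an Android device.
shared_prefs = """<map>
    <string name="auth_token">example-session-token</string>
    <string name="patient_id">12345</string>
</map>"""

root = ET.fromstring(shared_prefs)
for item in root.findall("string"):
    # Every value is readable (and editable) in cleartext.
    print(item.get("name"), "=", item.text)
```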
75% of AppSec practitioners and 49% of developers believe there is a cultural divide between their respective teams, according to ZeroNorth.
As digital transformation takes hold, it is increasingly vital that AppSec teams and developers work well together. With DevOps methodology seeing more adoption, teams are delivering software at continually higher velocities. Speed is the culture of DevOps, which often runs counter to the culture of security: risk averse and rigid.
The research, conducted by Ponemon Institute, surveyed 581 security practitioners and 549 developers on the cultural divide, its implications, the impact of COVID-19 and teleworking on the divide, and how to bridge the divide.
The findings of the research highlight both the software delivery and security impacts resulting from the cultural divide across AppSec and developer teams. For example, 56% of developers say AppSec stifles innovation.
On the other hand, 65% of AppSec professionals believe developers do not care about securing applications early in the software development lifecycle.
Teams not aligned on application risk
Importantly, too, for AppSec and developers to share a culture centered on delivering secure applications, there must be a shared understanding of risk. The teams are not aligned on this front, however: only 35% of developers say application risk is increasing, while 60% of AppSec professionals believe this to be true.
“As this survey shows, the cultural divide is here today, and will become more exacerbated as organizations move towards DevOps, rendering the traditional, centralized model for security obsolete,” said ZeroNorth CEO, John Worrall.
“We believe this opens the doors for CISOs to become a pillar that supports the bridge between AppSec and development cultures. By enabling a culture that empowers both development and security to execute on their priorities, CISOs can transform the cultures that stifle innovation while significantly improving security.”
“This important research reveals the serious impact the AppSec and Developer cultural divide can have on an organization’s security posture,” said Larry Ponemon, chairman, Ponemon Institute.
“Based on the research findings, we recommend organizations take the following five steps to help bridge the cultural divide: (1) ensure sufficient resources are allocated to ensure applications are secured in the development and production phase of the SDLC, (2) apply application security practices consistently across the enterprise, (3) ensure developers have the knowledge and skill to address critical vulnerabilities in the application development and production life cycle, (4) conduct testing throughout the application development and (5) ensure testing methods scale efficiently from a few to many applications.”
Understanding the cultural divide and its implications
- Developer and AppSec practitioners don’t agree on which function is responsible for the security of applications. 39% of developers say the security team is responsible, while 67% of AppSec practitioners say their teams are responsible.
- AppSec and developer respondents admit working together is challenging, with AppSec respondents saying it is because the developers publish code with known vulnerabilities. Developers say security does not understand the pressure of meeting their deadlines and security stifles their ability to innovate.
- Digital transformation is putting pressure on organizations to develop applications at increasing speeds, which puts security at risk. 65% of developer respondents say they feel the pressure to develop applications faster than before the digital transformation, and 50% of AppSec respondents agree.
- 71% of AppSec respondents say the state of security is undermined by developers who don’t care about the need to secure applications early in the SDLC and 69% say developers do not have visibility into the overall state of application security.
The impact of COVID-19 and teleworking on the cultural divide
- 66% of developers and 72% of AppSec respondents say teleworking is stressful. Only 29% of developers and 38% of AppSec respondents are very confident that teleworkers are complying with organizational security and privacy requirements.
- 74% of AppSec and 47% of developer respondents say their organizations were highly effective at stopping security compromises before COVID-19. After the pandemic started, only one-third of respondents in both groups say their effectiveness is high.
Nearly half of organizations regularly and knowingly ship vulnerable code despite using AppSec tools, according to Veracode.
Among the top reasons cited for pushing vulnerable code were pressure to meet release deadlines (54%) and finding vulnerabilities too late in the software development lifecycle (45%).
Respondents said that the lack of developer knowledge to mitigate issues and lack of integration between AppSec tools were two of the top challenges they face with implementing DevSecOps. However, nearly nine of ten companies said they would invest further in AppSec this year.
The software development landscape is evolving
The research sheds light on how AppSec practices and tools are intersecting with emerging development methods and creating new priorities such as reducing open source risk and API testing.
“The software development landscape today is evolving at light speed. Microservices-driven architecture, containers, and cloud-native applications are shifting the dynamics of how developers build, test, and deploy code. Without better testing, integration, and regular developer training, organizations will put themselves at jeopardy for a significant breach,” said Chris Wysopal, CTO at Veracode.
- 60% of organizations report having production applications exploited by OWASP Top 10 vulnerabilities in the past 12 months. Similarly, seven in 10 applications have a security flaw in an open source library on initial scan.
- Developers’ lack of knowledge on how to mitigate issues is the biggest AppSec challenge – 53% of organizations only provide security training for developers once a year or less. Data shows that the top 1% of applications with the highest scan frequency carry about five times less security debt, or unresolved flaws, than the least frequently scanned applications, which means frequent scanning helps developers find and fix flaws to significantly lower their organization’s risk.
- 43% cited DevOps integration as the most important aspect to improving their AppSec program.
- 84% report challenges due to too many AppSec tools, making DevOps integration difficult. 43% of companies report that they have between 11 and 20 AppSec tools in use, while 22% said they use between 21 and 50.
According to ESG, the most effective AppSec programs report the following as some of the critical components of their program:
- Application security is highly integrated into the CI/CD toolchain
- Ongoing, customized AppSec training for developers
- Tracking continuous improvement metrics within individual development teams
- AppSec best practices are being shared by development managers
- Using analytics to track progress of AppSec programs and to provide data to management
Need a tool to check your Python-based applications for security issues? Facebook has open-sourced Pysa (Python Static Analyzer), a tool that looks at how data flows through the code and helps developers prevent data flowing into places it shouldn’t.
How the Python Static Analyzer works
Pysa is a security-focused tool built on top of Pyre, Facebook’s performant type checker for Python.
“Pysa tracks flows of data through a program. The user defines sources (places where important data originates) as well as sinks (places where the data from the source shouldn’t end up),” Facebook security engineer Graham Bleaney and software engineer Sinan Cepel explained.
“Pysa performs iterative rounds of analysis to build summaries to determine which functions return data from a source and which functions have parameters that eventually reach a sink. If Pysa finds that a source eventually connects to a sink, it reports an issue.”
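A toy Python example of the kind of source-to-sink flow described above. The function names and the FakeDB stub are invented for illustration; in real use, Pysa learns which functions are sources and sinks from model files describing frameworks like Django and Tornado.

```python
def get_param(request):                 # analyzer summary: returns source data
    return request["params"]["q"]

def run_query(db, query):               # analyzer summary: `query` reaches a sink
    return db.execute(query)

def search(request, db):
    q = get_param(request)              # tainted, user-controlled value
    # Source connects to sink through string formatting: SQL injection.
    return run_query(db, f"SELECT * FROM t WHERE name = '{q}'")

class FakeDB:
    def execute(self, query):
        return query                    # echoes the query for demonstration

result = search({"params": {"q": "x' OR '1'='1"}}, FakeDB())
print(result)  # attacker-controlled text embedded in the SQL statement
```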
It's used internally by Facebook to check the (Python) code that powers Instagram's servers, and to do so quickly. It's used to check developers' proposed code changes for security and privacy issues and prevent them from being introduced into the codebase, as well as to detect existing issues in a codebase.
Found issues are flagged and, depending on their type, the report is sent either to the developer or to security engineers for review.
“Because we use open source Python server frameworks such as Django and Tornado for our own products, Pysa can start finding security issues in projects using these frameworks from the first run. Using Pysa for frameworks we don’t already have coverage for is generally as simple as adding a few lines of configuration to tell Pysa where data enters the server,” the two engineers added.
The tool’s limitations and stumbling blocks
Pysa can't detect all security or privacy issues, just data flow–related ones. What's more, it can't detect all data flow–related issues, because the Python programming language is very flexible and dynamic (it allows runtime code imports, changing what function calls do, etc.).
Finally, those who use it have to make a choice about how many false positives and negatives they will tolerate.
“Because of the importance of catching security issues, we built Pysa to avoid false negatives and catch as many issues as possible. Reducing false negatives, however, may require trade-offs that increase false positives. Too many false positives could in turn cause alert fatigue and risk real issues being missed in the noise,” the engineers explained.
The number of false positives can be reduced by using sanitizers, both manually added and automatic.
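A sanitizer, in this context, is simply a function that neutralizes tainted data; once a flow passes through it, the analyzer can be told to stop reporting that flow. A minimal Python example using the standard library:

```python
import html

def render_comment(user_input):
    # html.escape neutralizes the characters that make HTML/JS injection
    # possible, so the tainted value is safe to embed in a page.
    return "<p>" + html.escape(user_input) + "</p>"

print(render_comment("<script>alert(1)</script>"))
```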
Security researchers have analyzed contact-tracing mobile apps from around the globe and found that their developers have generally failed to implement suitable security and privacy protections.
The results of the analysis
In an effort to stem the spread of COVID-19, governments are aiming to provide their citizenry with contact-tracing mobile apps. But, whether they are built by a government entity or by third-party developers contracted to do the job, security has largely taken a backseat to speed.
Guardsquare researchers have unpacked and decompiled 17 Android contact-tracing apps from 17 countries to see whether developers implement name obfuscation, string, asset/resource and class encryption. They’ve also checked to see whether the apps will run on rooted devices or emulators (virtual devices).
- Only 41% of the apps have root detection
- Only 41% include some level of name obfuscation
- Only 29% include string encryption
- Only 18% include emulator detection
- Only 6% include asset / resource encryption
- Only 6% include class encryption.
The percentages vary according to region. Grant Goodes, Chief Scientist at Guardsquare, made sure to note that they have not checked all existing contact-tracing apps, but that the sample they did test "provides a window into the security flaws most contact tracing apps contain."
Security promotes trust
The looked-for protections should make it difficult for malicious actors to tamper with and “trojanize” the legitimate apps.
Name obfuscation, for example, hides identifiers in the application’s code to prevent hackers from reverse engineering and analyzing source code. String encryption prevents hackers from extracting API keys and cryptographic keys included in the source code, which could be used by attackers to decrypt sensitive data (for identity theft, blackmailing, and other purposes), or to spoof communications to the server (to disrupt the contact-tracing service).
Asset/resource encryption should prevent hackers from accessing/reusing files that the Android OS uses to render the look and feel of the application (e.g., screen-layouts, internationalized messages, etc.) and custom/low-level files that the application may need for its own purposes.
These security and privacy protections are important for every mobile app, not just contact-tracing apps, Goodes noted, but they are particularly salient for the latter, since some of them are mandatory for citizens to use and since their efficacy hinges on widespread adoption.
“When security flaws are publicized, the whole app is suddenly distrusted and its utility wanes as users drop off. In the case of countries who build their own apps, this can erode citizen trust in the government as well, which further increases public health risks,” he added.
Applications are a gateway to valuable data, so it’s no wonder they are one of attackers’ preferred targets.
And since modern applications aren’t a monolithic whole but consist of many separate components “glued together” over networks, attackers have at their disposal many “doors” through which they can attempt access to the data.
Easy targets will remain popular
Some of these doors are more popular than others. According to the latest Application Protection Report by F5 Networks, attackers love to:
1. Target PHP.
"PHP is a widespread and powerful server-side language that's been used in 80% of sites on the web since 2013. It underpins several of the largest web applications in the world, including WordPress and Facebook," F5 analysts explained the attraction.
2. Engage in injection attacks and formjacking (the latter especially when targeting the retail sector).
In 2019, formjacking payment cards was responsible for 87% of web breaches and 17% of known breaches in total (up from 71% and 12% in 2018). In 2019, the retail sector was the most significant formjacking target: 81% of retail breaches were from formjacking attacks, while nearly all other sectors tended to be breached most often through the access tier.
"The lesson is clear: for any organization that accepts payment cards via the web, their shopping cart is a target for cyber-criminals," the analysts pointed out.
3. Gain access to accounts (and especially email accounts) via phishing, brute forcing, credential stuffing or using stolen credentials.
“Access tier attacks are any that seek to circumvent the legitimate processes of authentication and authorization that we use to control who gets to use an application, and how they can use it. The result of this kind of attack is a malicious actor gaining entry to a system while impersonating a legitimate user. They then use the legitimate user’s authorization to accomplish a malicious goal— usually data exfiltration,” the analysts explained.
Attackers use a number of tactics to keep these attacks unnoticed, but organizations also have a lot of defensive options at their disposal to prevent them.
4. Go after unmonitored, vulnerable, poorly secured or misconfigured APIs.
“In the days of monolithic apps, whatever core business logic generated value needed to be supported by a user interface, storage, and other meta-functions. Now it is sufficient to develop a single specialized service, and use APIs to either outsource other functions to bring an app to market, offer the service to other app owners, or both,” the analysts explained.
Their widespread use makes them a big target, and a combination of factors makes them a rich one:
- They are often configured with overly broad permissions
- They often lack visibility and monitoring.
There are solutions to these problems
Attackers go where the data is, and that’s why organizations in each sector/industry should develop risk-based security programs and tailor controls and architecture to reflect the threats they actually face, the analysts advise.
To counter access attacks, organizations should implement multi-factor authentication where fitting and possible, but should also consider:
- Checking passwords against a dictionary of default, stolen, and well-known passwords
- Making sure the system can detect and prevent brute force attacks by, for example, using CAPTCHA, slowing down sessions, setting up alarms, etc.
- Creating simple methods for users to report suspected phishing
- Encrypting or eliminating confidential data from the organization’s email caches
- Enabling logging (to be able to discover what the attackers did when they gained access).
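The first two items above can be sketched in a few lines of Python. The small breached-password set stands in for a real corpus such as a Have I Been Pwned download, and the back-off numbers are illustrative.

```python
BREACHED = {"password", "123456", "letmein", "qwerty"}  # stand-in corpus

def password_allowed(pw):
    """Reject short passwords and anything on the breached list."""
    return len(pw) >= 12 and pw.lower() not in BREACHED

failures = {}

def backoff_seconds(user):
    """Delay doubles with each consecutive failed login, capped at 60s."""
    n = failures.get(user, 0)
    failures[user] = n + 1
    return min(2 ** n, 60)

print(password_allowed("letmein"))                   # False
print([backoff_seconds("alice") for _ in range(3)])  # [1, 2, 4]
```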
Spotting and foiling injection and formjacking attacks can be done by securing servers, patching injection vulnerabilities, employing change control, using web application firewalls (WAFs), and testing and monitoring all third-party components on sites with forms accepting critical information.
But organizations should be aware that the injection landscape is constantly changing, and they have to follow the trends and adapt.
Finally, organizations can mitigate the risk of API attacks by:
- Making (and maintaining) an inventory of their APIs
- Deploying authentication for them and storing credentials securely
- Limiting their permissions
- Monitoring them (by logging connections and reviewing them)
- Encrypting the API connections
- Testing APIs
- Implementing API security tools.
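Two of those mitigations, keeping credentials out of source code and logging every connection for later review, can be sketched as follows. The endpoint, environment variable name, and transport callable are hypothetical.

```python
import logging
import os

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("api-audit")

def call_api(endpoint, transport):
    # Credential comes from the environment, never hardcoded in source.
    token = os.environ.get("API_TOKEN")
    if not token:
        raise RuntimeError("API_TOKEN not set")
    # Log each connection so API usage can be reviewed.
    audit.info("api call endpoint=%s", endpoint)
    return transport(endpoint, {"Authorization": f"Bearer {token}"})

os.environ["API_TOKEN"] = "demo-token"  # for illustration only
resp = call_api("https://api.example.com/v1/users",
                lambda url, headers: (url, headers["Authorization"]))
print(resp)
```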
To help individuals and organizations choose video call apps that suit their needs and their risk appetite, Mozilla has released a new “Privacy Not Included” report that focuses on video call apps.
The report includes the following popular offerings:
- Zoom’s Zoom app
- Google’s Duo, Hangouts, and Meet
- Apple’s FaceTime
- Microsoft’s Skype and Teams
- Facebook’s Messenger, Messenger Kids, and WhatsApp
- Epic Games’ Houseparty
- Discord’s Discord app
- 8×8’s Jitsi Meet
- Signal Technology Foundation’s Signal
- Verizon’s BlueJeans
- LogMeIn’s GoToMeeting
- Cisco’s WebEx
- Doxy.me’s Doxy.me telemedicine app
The report is based on Mozilla's researchers reviewing each app's privacy policies and specifications, the user controls it offers, etc.
Each app is given an overall security rating, based on criteria including:
- Whether it uses encryption (and what kind of encryption)
- Whether it requires the use of strong passwords
- Whether it provides automatic security updates
- Whether the developers manage security vulnerabilities using tools like bug bounty programs and clear points of contact for reporting vulnerabilities.
Three of the evaluated apps have failed to meet Mozilla’s Minimum Security Standards, but that doesn’t mean that they should not be used. Different users have different needs and wants, and that includes those related to security and privacy.
Mozilla noted that many of the apps provide admirable privacy and security features and that all apps use some form of encryption (though not all encryption is end-to-end). Still, some apps, like Doxy.me, offer inadequate protection, especially when you consider the extremely sensitive health information that is usually shared through it.
Making a choice
Consumers and organizations should review Mozilla’s findings and decide for themselves which solution is right for them. I would also advise checking similar research reports and mentions, which may include additional offerings and point out other qualities that one may search for in a solution (e.g., whether it supports self-hosting) or traits one may avoid.
Mozilla's researchers also pointed out that different apps have very different sets of video chat features, making some more fitting for enterprise use and others a more natural choice for consumers. Business users who want a fuller set of features and a higher level of security and have money to pay should look to business-focused apps, they noted.
Ashley Boyd, Mozilla’s Vice President, Advocacy, pointed out that, with a record number of people using video call apps to conduct business, teach classes, and catch up with friends, it’s more important than ever that this technology be trustworthy.
We have witnessed how Zoom moved to quickly patch security flaws reported by researchers and how the addition of new, helpful features has been copied by competitors (e.g., Zoom and Google Hangouts offered one-click links to get into meetings, and Skype recently followed suit).
“The good news is that the boom in usage has put pressure on these companies to improve their privacy and security for all users, which should be a wake-up call for the rest of the tech industry,” Boyd concluded.
Security testing data is “the unsung hero” of securing application development. It’s the backbone of application development quality, compliance and risk management, and rests on the three fundamental pillars of security:
- Confidentiality (the data is protected from unauthorized access)
- Integrity (the data can’t be/hasn’t been tampered with)
- Availability (the data is consistently available to support your business).
When you set out to design an application, you want to make sure it behaves as intended. In other words, that it does what you want, when it’s supposed to, and that it does so consistently.
Application security is, in a way, the antithesis of this. It’s designing, building and testing applications to ensure that your application doesn’t do anything you don’t want it to do, such as crashing repeatedly or, worse, giving up information. Simply put, it’s the “it shouldn’t do anything else” part of your application.
Both application vulnerabilities and application security are deeply entrenched in AppDev.
Security vulnerabilities are functions of the application’s design — either you used a library in building the application that was insecure, or you coded an application that was in some way insecure, or you have fundamental architectural security flaws.
The same principle holds with the latter, application security, in that it’s central to the design process. You have to design the app securely, build it securely and test it to verify that you did everything right. This is where penetration testing comes into play. It’s never the answer to application security, but rather the verification of a program.
Ultimately, there’s no magic pill and doing one without the other can leave you exposed.
Overcoming the obstacles
In a world where time is money, convincing people that application security is important is not always easy. The rush to get an application to market can trump the need to integrate security into the development process. Not only are you up against getting people to commit to doing it, but there’s also the matter of understanding how to do it and training your developers to do it well.
So how does a CISO shift the mindset from “security is security’s job and security’s alone” to it being a corporate responsibility? It comes down to executive sponsorship – the decision-makers have to decide that it matters.
The first step is educating the senior staff on the nature of the data you store, how you’re using it and the risk that entails. From there, it’s much easier to sell appropriate defenses. You must be able to say we have this data, here’s how we use it and here’s how we should defend it, and, just as importantly, here’s what can happen if we don’t.
Picking the right partner
When selecting an AppSec partner, look for a company that regards the application development cycle holistically from design to production to retirement.
Secondly, ask if they are committed to working in alignment with your security goals as they relate to those affecting the company at a macro level. Application security is a risk, but businesses face many other risks and they need to be balanced.
Finally, your partner should be focused on the ability to measure success. Is the program successful and are the apps becoming more secure? If the company’s not showing you how to measure it, then you’re not going to be able to manage it and it’s probably going to fail.
Just do it
The good news is that when it comes to application security, it’s not brain surgery. The technology is not really that complex. In fact, it’s fairly predictable. You don’t even need to come up with anything new — OWASP, for instance, has some excellent resources.
The challenge lies in doing application security well across a large organization. With 10 developers, it’s pretty straightforward, but with 1,000 it gets complicated. Consistency (remember those three pillars from earlier) is hard, and the larger the enterprise, the harder it is to achieve. In the end, it comes down to making the choice to do it and then actually doing it, and doing it well.
There’s an intrinsic link between developer happiness and application security hygiene, as well as an alarming level of application breaches, according to Sonatype.
For the first time, the findings demonstrate a correlation between developer happiness and application security hygiene, with happy developers 3.6x less likely to neglect security when it comes to code quality. Happy developers are also 2.3x more likely to have automated security tools in place, and 1.3x more likely to follow open source security policies.
In addition, the findings showed that developers working within mature DevOps practices are 1.5x more likely to enjoy their work, and 1.6x more likely to recommend their employer to prospects, highlighting the significant role DevSecOps transformations play in both application security and developers’ job satisfaction.
The study also revealed that 28% of mature organizations are aware of an open source component-related breach in the past 12 months, compared to 19% of respondents with immature DevOps practices.
The importance of mature DevOps practices
While breaches appear higher for mature DevOps practices, industry advocates point to cultural differences that reward open communication, welcome new information, and encourage tighter collaboration between developer and security tribes.
“Developer happiness based on mature DevOps practices is fundamental to the quality and delivery of secure software,” said Derek Weeks, Vice President at Sonatype.
“By introducing mature DevOps practices, businesses can not only innovate faster, they can enhance their development teams’ job satisfaction, and ultimately differentiate themselves as employers – critical when so many companies face significant skills shortages and increased competition.”
Development velocity is accelerating rapidly
55% of respondents deploy code to production at least once per week, compared to 47% of respondents in 2019. Even as year-over-year velocity increased, 47% of developers continued to admit that, while security was important, they did not have time to spend on it – a finding consistent with the same survey in 2018 (48%) and 2019 (48%).
High automated security investments
Automated security investments are highest in web application firewalls (59%), open source governance (44%), and intrusion detection (42%).
The greatest differences in investment priorities between mature and immature DevOps programs are seen across Container Security, with mature practices investing 2.2x more than immature practices; this is closely followed by investments in Dynamic Analysis (DAST) and Software Composition Analysis (SCA), with 2.1x and 1.9x more respectively.
Organizations face major infrastructure and security challenges in supporting multi-cloud and edge deployments, according to a Volterra survey of more than 400 IT executives.
The survey reveals that multi-cloud deployments are being driven primarily by a need to maximize availability and reliability for applications, while at the edge IoT is the top use case driving deployments.
However, multi-cloud deployments are threatened by security and connectivity problems due to differences between cloud providers, as well as operational challenges in managing workloads across several clouds. Meanwhile, edge deployments suffer from an inability to meet unique infrastructure needs as well as difficulties in managing apps across different edge sites.
“The increasing deployment of technologies including AI, machine learning and IoT are causing apps and data to be increasingly spread across multiple clouds and edge sites. This is leading to a number of serious operational and security challenges for organizations trying to support multi-cloud and edge deployments,” said Ankur Singla, CEO, Volterra.
“In this survey, we found 70% of IT leaders think it’s ‘very important’ to have a consistent operational experience between the edge and public and private clouds. But as the data shows, there are tremendous issues preventing this within edge sites and multiple clouds.”
Benefits and barriers in multi-cloud deployments
97% of IT leaders surveyed indicated that they are planning to distribute workloads across two or more clouds. Respondents identified three key reasons for putting the same workloads at multiple cloud providers:
- Maximizing availability and reliability (63%)
- Meeting regulatory and compliance requirements (47%)
- Leveraging best-of-breed services from each provider (42%)
Multi-cloud deployments yield better availability and reliability by ensuring that if one cloud happens to go down, the app will still be available in another cloud. It’s also advantageous for regulatory and compliance reasons as it allows organizations to keep an app’s data in a specific geographic region if local law mandates it.
Finally, multi-cloud enables organizations to leverage the unique advantages of each cloud, such as Google Cloud Platform’s strength in machine learning or Microsoft Azure’s seamless integration with Office 365 databases.
But major issues with security, connectivity reliability and performance, and inconsistent service offerings make it difficult to efficiently deploy and operate multi-cloud deployments. When asked about the biggest challenges in managing workloads across different cloud providers, IT leaders highlighted as the top problems:
- Secure and reliable connectivity between providers (60%)
- Different support and consulting processes (54%)
- Different platform services (53%)
Furthermore, respondents indicated that their biggest challenges when connecting between cloud providers for a shared workload are security (54%), reliability (44%), and performance (39%).
Edge cloud adoption and challenges
Propeller Insights’ survey data around edge computing shows that organizations are deploying apps at the edge primarily to support IoT (57%), smart manufacturing (52%) and content delivery (46%).
Respondents explained that their organizations are putting these workloads at the edge rather than public or private clouds because they need to control and analyze data for these use cases locally (54%) and there’s too much latency when sending edge data to public cloud-based apps (47%).
However, edge deployments also face serious challenges, with managing infrastructure and apps across numerous edge sites posing potential barriers to success. When asked to identify the biggest business concerns about having apps at the edge, IT execs pointed to:
- The difficulty in managing apps across multiple edge locations (44%)
- An inability to accommodate the IT infrastructure needed to host and operate at the edge (38%)
Furthermore, when asked to describe more specific technical challenges at the edge, respondents called out the difficulty of integrating cloud-native workflows like automation, CI/CD and performance management (69%) and trouble installing a full set of application infrastructure (compute/storage/network/security) (67%).
The survey also looked at the challenges of managing edge deployments over the longer term, revealing that the two biggest challenges to operating edge apps for their entire life cycle are:
- The lack of resources or time to keep applications and infrastructure up-to-date (37%)
- Managing distributed clusters as siloed instances rather than a single resource (26%)
“There are a few key themes that jump out from the data and illustrate why organizations are struggling with multi-cloud and edge deployments,” said Ankur Singla.
“For multi-cloud deployments, the biggest challenges are security, connectivity and operations. There simply isn’t enough visibility across cloud platforms and it’s impossible for organizations to establish consistent policies or a common operational experience.
“For edge deployments, the biggest challenges are accommodating infrastructure needs and managing apps across different edge sites.”
“These issues reflect the major headaches that come from trying to manage apps distributed across multiple clouds or disparate edge sites with the current tools available. The status quo simply won’t work any longer.
“Organizations need a way to manage all these components as a single, distributed cloud to effectively leverage multi-cloud and edge deployments and the data within them,” said Singla.
Like it or not, cybercrime is big business these days. A casual glance at the news at any given time will typically reveal several new breaches, usually involving eye-watering amounts of personal or sensitive information stolen. As such, any executive board worth its salt should have long realized the importance of robust cyber defenses.
Sadly, even in the face of mounting evidence, this isn’t always the case. Often business priorities are given precedence over security priorities, particularly when optimal security practices risk interfering with business efficiency or overall productivity. Underfunding is another common concern for many CSOs and CISOs, with the board simply not prepared to give them the budget and/or resources they truly need to keep the business safe.
Businesses need to think long term
Underfunding security in order to boost other areas of the business may seem like a good idea in the short term, but it’s a big risk that can come back to bite senior executives pretty spectacularly if they aren’t careful. For example, while an additional £500,000 towards new security resources may not seem viable during annual budgeting cycles, it pales in comparison to the millions of pounds worth of fines, legal costs and mitigation expenses many organizations are faced with in the aftermath of a breach.
Just ask British Airways, which has been hit with a record £183 million fine from the Information Commissioner’s Office (ICO), following what it described as a “sophisticated, malicious criminal attack” on its website, during which details of about 500,000 customers were harvested.
Examples like this highlight just how important it is to ensure long term security and compliance by implementing cybersecurity practices that prevent such data breaches from happening in the first place. A more proactive approach to integrating cybersecurity practices into the wider business strategy can go a long way towards protecting against data loss, as well as empowering security teams with the ability to respond much more swiftly and precisely to any threats that do present themselves.
With more and more organizations now relying on software applications to grow their business, properly securing these applications is becoming absolutely essential. A great way to do this is by adopting a systematic, risk-based approach to evaluating and addressing cybersecurity vulnerabilities earlier in the software development life cycle (SDLC), rather than trying to do it after the fact.
Business and security objectives must be aligned
The most effective security approaches are the ones that have been properly aligned with those of the wider organization. But all too often, the idea of building security into the SDLC is reconsidered the moment it’s deemed to be having a detrimental impact on development times or release windows.
When the time needed to remediate a vulnerability threatens to delay the release of an important application, pressure quickly starts building on the security team. If it can’t make a compelling business case to delay release in order to fix the issue, it can quickly find itself on the outside looking in.
The role of risk in effective security decision making
In situations like the one above, security teams need to be able to quickly make senior decision makers recognize the stakes involved and the potential consequences of not fixing the vulnerability. This requires both a solid understanding of the app’s intended business purpose and an ability to frame the argument in a way decision-makers will understand, rather than drowning them in security jargon. One of the best ways to do this is with a risk-based approach, which has two main stages.
Stage one involves taking a comprehensive inventory of all web applications currently in development and putting a stringent monitoring process in place to quickly identify vulnerabilities. It’s critical to be thorough during this stage because if just one application is missed, or one system left unsecured, it creates a new potential access point for cybercriminals.
With stage one completed then stage two can begin, which incorporates business impact into the strategic planning process. By properly defining the potential losses that could occur from a specific vulnerability and helping senior executives understand them in plain terms, not only does it help drive home the need for effective security, it allows for much finer tuning of activities based on the level of risk they present to the overall organization.
Taking a SaaS-based approach to application scanning
Adopting a SaaS-based approach to application scanning throughout the SDLC allows security teams to continuously assess risk during the production process, rather than just at a handful of milestones. As a result, when combined with proper prioritization of activities, a much more accurate risk profile can be created than would otherwise be possible, which all levels of the company can buy into.
When it comes to effective security, it’s important for security teams to speak a language the whole organization understands. Taking a risk-based approach does this, translating often complex vulnerabilities and analysis into terms that are meaningful to all, and particularly to the senior executives. This allows proper discussions to take place, leading to mutual decisions that benefit the company as a whole and keep it protected from the plethora of cyber threats out there.
Google Apps is a service offered by Google that involves providing independent and customised versions of a number of products with a custom domain name. It includes many Web applications with functions similar to conventional office suites, such as Gmail, Google calendar, G-Talk, Google Groups, Google Sites and Google Docs.
Google Apps can be accessed for free and provides the same storage capacity as any other regular Gmail account. Any enterprise requiring additional storage for e-mail can purchase Google Apps for Business for an annual fee per user account.
5 Security Considerations When Coding
1. Input Checking
Always check user input to be sure that it is what you expected. Make sure it doesn’t contain characters or other data which may be treated in a special way by your program or any programs called by your program. This often involves checking for characters such as quotes, and checking for unusual input characters such as non-alphanumeric characters where a text string is expected. Often, these are a sign of an attack of some kind being attempted.
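As a minimal sketch of this kind of whitelist check (the function name and the 32-character limit are illustrative, not from the text above):

```c
#include <ctype.h>
#include <string.h>

/* Illustrative sketch: accept only non-empty, reasonably short,
 * purely alphanumeric input; quotes, semicolons and other special
 * characters are rejected outright. */
int is_safe_username(const char *s)
{
    size_t len = strlen(s);
    if (len == 0 || len > 32)               /* empty or suspiciously long */
        return 0;
    for (size_t i = 0; i < len; i++) {
        if (!isalnum((unsigned char)s[i]))  /* rejects quotes, ';', '%', ... */
            return 0;
    }
    return 1;
}
```

Whitelisting the characters you expect, as here, is generally safer than trying to blacklist the ones you fear.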
2. Range Checking
Always check the ranges when copying data, allocating memory or performing any operation which could potentially overflow. Some programming languages provide range-checked container access (such as std::vector::at() in C++), but many programmers insist on using the unchecked array index notation. In addition, the use of functions such as strcpy() should be avoided in favor of strncpy(), which allows you to specify the maximum number of characters to copy. Similar versions of functions such as snprintf() as opposed to sprintf() and fgets() instead of gets() provide equivalent length-of-buffer specification. The use of such functions throughout your code should prevent buffer overflows. Even if your character string originates within the program, and you think you can get away with strcpy() because you know the length of the string, that doesn’t mean that you, or someone else, won’t change things in the future and allow the string to be specified in a configuration file, on the command line, or from direct user input. Getting into the habit of range-checking everything should prevent a large number of security vulnerabilities in your software.
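A sketch of this bounded-copy habit (copy_label is a hypothetical helper, not a standard function):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: copy src into dst without ever overflowing.
 * snprintf() writes at most dstsz - 1 characters and always
 * NUL-terminates, which raw strncpy() does not guarantee. */
void copy_label(char *dst, size_t dstsz, const char *src)
{
    snprintf(dst, dstsz, "%s", src);
}
```

Even when today’s caller “knows” the string fits, routing every copy through a bounded helper like this means tomorrow’s configuration-file or command-line input can’t overflow the buffer.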
3. Principle Of Least Privilege
This is especially important if your program runs as root for any part of its runtime. Where possible, a program should drop any privileges it doesn’t need, and use the higher privileges for only those operations which require them. An example of this is the Postfix mailserver, which has a modular design allowing parts which require root privileges to be run distinctly from parts which do not. This form of privilege separation reduces the number of attack paths which lead to root privileges, and increases the security of the entire system because those few paths that remain can be analysed critically for security problems.
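On POSIX systems, the drop-what-you-don’t-need pattern can be sketched like this (the function name is illustrative, and the uid/gid values would come from your deployment):

```c
#include <unistd.h>
#include <sys/types.h>

/* Illustrative sketch: after the one operation that needs root
 * (binding a low port, say), permanently drop to an unprivileged
 * user.  Order matters: the group must be changed while we are
 * still root, and a final setuid(0) must fail if the drop worked. */
int drop_privileges(uid_t uid, gid_t gid)
{
    if (geteuid() != 0)
        return 0;                /* already unprivileged; nothing to drop */
    if (setgid(gid) != 0)
        return -1;               /* group first, while still root */
    if (setuid(uid) != 0)
        return -1;               /* then user; this is irreversible */
    if (setuid(0) == 0)
        return -1;               /* regaining root must be impossible */
    return 0;
}
```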
4. Avoid Race Conditions
A race condition is a situation where a program performs an operation in several steps, and an attacker has the chance to catch it between steps and alter the system state. An example would be a program which checks file permissions, then opens the file. Between the permission check (the stat() call) and the file open (the fopen() call), an attacker could change the file being opened by renaming another file to the original file’s name. In order to prevent this, fopen() the file first, and then use fstat(), which takes a file descriptor instead of a filename. Since a file descriptor always points to the file that was opened with fopen(), even if the filename is subsequently changed, the fstat() call will be guaranteed to be checking the permissions of the same file. Many other race conditions exist, and there are often ways to prevent them by carefully choosing the order of execution of certain functions.
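The open-then-check pattern described above can be sketched as follows (open_if_regular is a hypothetical helper name):

```c
#include <stdio.h>
#include <sys/stat.h>

/* Illustrative sketch: open first, then fstat() the descriptor.
 * Because the descriptor is bound to the file actually opened,
 * renaming tricks between "check" and "use" no longer work. */
FILE *open_if_regular(const char *path)
{
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return NULL;

    struct stat st;
    if (fstat(fileno(f), &st) != 0 || !S_ISREG(st.st_mode)) {
        fclose(f);               /* directory, device, fifo: refuse */
        return NULL;
    }
    return f;                    /* we checked the same file we opened */
}
```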
5. Register Error Handlers
Many languages support the concept of a function which can be called when an error is detected, or the more flexible concept of exceptions. Make use of these to catch unexpected conditions and return to a safe point in the code, instead of blindly progressing in the hope that the user input won’t crash the program, or worse!
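In C, which has no exceptions, the same idea can be sketched with a caller-supplied error callback (the names parse_port and log_error are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

typedef void (*error_fn)(const char *msg);

/* Illustrative handler: log the problem and let the caller carry on. */
void log_error(const char *msg)
{
    fprintf(stderr, "error: %s\n", msg);
}

/* Parse a TCP port, reporting bad input through the registered
 * handler and returning a safe default instead of crashing. */
int parse_port(const char *s, error_fn on_error)
{
    char *end;
    long v = strtol(s, &end, 10);
    if (*s == '\0' || *end != '\0' || v < 1 || v > 65535) {
        on_error("invalid port, using default 8080");
        return 8080;             /* safe point: a known-good fallback */
    }
    return (int)v;
}
```

The point is that bad input lands in one well-defined place rather than propagating until something crashes.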
Application Security Services Overview
Effectively assess, manage, and secure your organization’s web usage and business-critical applications using our Application Security Services.
Application security encompasses measures taken throughout the code’s life cycle to prevent gaps in the security policy of an application or the underlying system (vulnerabilities) through flaws in the design, development, deployment, upgrade, or maintenance of the application.
Applications control only how the resources granted to them are used, not which resources are granted to them. Through application security, they in turn determine how users of the application may use those resources.
The Application Security Model used can vary. Generally, the choices are between using one of the following application security models.
- Database Role Based
- Application Role Based
- Application Function Based
- Application Role And Function Based
- Application Table Based
The choice depends particularly on what needs to be tested.
What Can We Test For?
Threats / Attacks:
- Buffer overflow; cross-site scripting; SQL injection; canonicalization
- Runtime tampering: an attacker modifies an existing application’s runtime behavior to perform unauthorized actions, exploited via binary patching, code substitution, or code extension
- Network eavesdropping; brute force attacks; dictionary attacks; cookie replay; credential theft
- Elevation of privilege; disclosure of confidential data; data tampering; luring attacks
- Unauthorized access to administration interfaces; unauthorized access to configuration stores; retrieval of clear text configuration data; lack of individual accountability; over-privileged process and service accounts
- Sensitive data and information: access to sensitive code, data or information in storage; network eavesdropping; code/data tampering
- Session hijacking; session replay; man-in-the-middle attacks
- Poor key generation or key management; weak or custom encryption
- Query string manipulation; form field manipulation; cookie manipulation; HTTP header manipulation
- Information disclosure; denial of service attacks
- Auditing and logging: user denies performing an operation; attacker exploits an application without trace; attacker covers his or her tracks