Traditional password-based security might be headed for extinction, but that moment is still far off.
In the meantime, most of us need something to prevent our worst instincts when it comes to choosing passwords: using personal information, predictable (e.g., sequential) keystroke patterns, password variations, well-known substitutions, single words from a dictionary and – above all – reusing the same password for many different private and enterprise accounts.
What does a modern password policy look like?
While using unique passwords for every account is a piece of advice that has withstood the test of time (though not the test of widespread compliance), people also used to be told to use a mix of letters, numbers and symbols and to change passwords every 90 days – recommendations that the evolving threat landscape has made obsolete and even somewhat harmful.
In the past decade, academic research on the topic of password practices and insights gleaned from passwords compromised in breaches have revealed what people were actually doing when they were creating passwords. This helped unseat some of the prevailing password policies that were in place for so long, Josh Horwitz, Chief Operations Officer of Enzoic, told Help Net Security.
The latest NIST-sanctioned advice regarding enterprise password policies (as delineated in NIST Special Publication 800-63B) includes, among other things, the removal of the requirement for character composition rules and for mandatory periodic password changes. Those are recommendations that are also being promulgated by Microsoft.
As data breaches now happen every single day and attackers are trying out the revealed passwords on different accounts in the hope that the user has reused them, NIST also advises companies to verify that passwords are not compromised before they are activated and check their status on an ongoing basis, against a dynamic database comprised of known compromised credentials.
The need for modern tools
But the thing is, most older password policy tools provide no way to check whether a password is both strong and uncompromised once it has been chosen or set.
There’s really only one that both checks the passwords at creation and continuously monitors their resilience to credential stuffing attacks, by checking them against a massive (7+ billion) database of compromised credentials that is updated every single day.
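Enzoic does not publish its lookup protocol in this article, but the general technique for screening a candidate password against a breach corpus, without the full password hash (let alone the password) ever leaving the client, can be sketched as follows. The tiny in-memory corpus, the function name, and the 5-character k-anonymity prefix scheme are illustrative assumptions, not Enzoic's implementation:

```python
import hashlib

# Toy breach corpus. In production this is a multi-billion-entry database
# refreshed daily; here the SHA-1 digests are computed locally for the demo.
COMPROMISED = {
    hashlib.sha1(pw.encode()).hexdigest().upper()
    for pw in ["password", "qwerty123", "letmein"]
}

def is_compromised(password: str) -> bool:
    """Return True if the password's SHA-1 digest appears in the corpus.

    k-anonymity variant: a client would transmit only the first 5 hex
    characters of the digest, receive all matching suffixes, and compare
    them locally - so the server never sees the full hash.
    """
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Server-side range lookup, simulated against the local set:
    candidates = {h[5:] for h in COMPROMISED if h.startswith(prefix)}
    return suffix in candidates
```

A policy tool would run this check at password creation and then re-run it on a schedule as new breach data arrives, which is what makes the freshness of the corpus so important.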
“Some organizations will gather this information from the dark web and other places where you can get lists of compromised passwords, but most tools aren’t designed to incorporate it and it’s still a very manual process to try to keep that information up to date. It’s effectively really hard to maintain the breadth and frequency of data updates that are required for this approach to work as it should,” Horwitz noted.
But for Enzoic, this is practically one of its core missions.
“We have people whose full-time job is to go out and gather threat intelligence, databases of compromised passwords, and cracking dictionaries. We’ve also invested substantially in proprietary technology to automate that process of collection, cleansing and indexing of that information,” he explained.
“Our database is updated multiple times each day, and we’re really getting the breadth of data out there, by integrating both large and small compromised databases in our list – because hackers will use any database they can get their hands on, not just those stolen in well-publicized data breaches.”
Enzoic for Active Directory
This constantly updated list/database is what powers Enzoic for Active Directory, a tool (plug-in) that integrates into Active Directory and enforces additional password rules to prevent users from using compromised credentials.
The solution checks the password when it’s created and when it’s reset, and rechecks it daily against this real-time compromised password database. Furthermore, it does so automatically, without the IT team having to do anything except set it up once.
Enzoic for AD is able to detect and prevent the use of:
- Fuzzy variations of compromised passwords
- Unsafe passwords consisting of an often-used root word and a few trailing symbols and numbers
- New passwords that are too similar to the one the user previously used
- Passwords that employees at specific organizations are expected to choose (this is accomplished by using a custom dictionary that can be tailored to each organization)
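A rough sense of how checks like those above can work (not Enzoic's actual algorithm) is sketched below: trailing digits and symbols are stripped, common character substitutions are undone, the resulting root word is compared against a blocklist plus a custom dictionary, and similarity to the previous password is measured. The blocklist entries and the similarity threshold are hypothetical:

```python
import re
from difflib import SequenceMatcher

# Hypothetical blocklist: compromised root words plus per-organization
# custom dictionary entries (company name, product names, etc.).
BLOCKED_ROOTS = {"password", "qwerty", "acmecorp"}

def normalize(password: str) -> str:
    """Strip trailing digits/symbols and undo common substitutions,
    so 'P@ssw0rd2024!' reduces to 'password'."""
    stripped = re.sub(r"[\d\W_]+$", "", password.lower())
    return stripped.translate(str.maketrans("@40$31!", "aaoseii"))

def too_similar(new: str, old: str, threshold: float = 0.8) -> bool:
    """Flag new passwords that are near-copies of the previous one."""
    return SequenceMatcher(None, new.lower(), old.lower()).ratio() >= threshold

def is_blocked(password: str, previous: str = "") -> bool:
    if normalize(password) in BLOCKED_ROOTS:
        return True
    return bool(previous) and too_similar(password, previous)
```

For example, `is_blocked("P@ssw0rd2024!")` is caught by normalization, and `is_blocked("Summer2024!", previous="Summer2023!")` is caught by the similarity check, even though both would pass naive composition rules.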
The tool uses a standard password filter object to create a new password policy that works anywhere that defers to Active Directory, including Azure AD and third-party password reset tools.
Can multi-factor authentication save us?
Many will wonder whether such a tool is really crucial for keeping AD accounts safe. “What if we also use multi-factor authentication? Doesn’t that solve our authentication problems and keep us safe from attacks?”
In reality, passwords remain part of every environment, and not every authentication event includes multi-factor authentication (MFA).
“You can offer MFA, but until you actually require its use and get rid of the password, there are always going to be ways in that attackers can use,” Horwitz pointed out.
“NIST also makes it very clear that authentication security should include multiple layers, and that each of these layers – including the password layer – needs to be hardened.”
Do you really need Enzoic for Active Directory?
Enzoic has made it easy for enterprises to check whether some of the AD passwords used by their employees are weak or have been compromised: they can deploy a free password auditing tool (Enzoic for Active Directory Lite) to take a quick snapshot of their domain’s password security state.
“Some password auditing tools take a long time to try to brute-force passwords, but attackers are much more likely to start their efforts with compromised passwords,” Horwitz added.
“Our tool takes just minutes to perform the audit, it’s simple to run, and allows IT and IT security leaders and professionals to realize the extent of the problem and to easily communicate the issue to the business side.”
Enzoic for Active Directory is likewise simple to install and use, and is built for easy implementation and automatic maintenance of the modern password policy.
“It’s a low complexity tool, but this is where it really shines: it allows you to screen passwords against a massive database of compromised passwords that gets updated every day – and allows you to do this at lightning speed, so that it can be done at the time that the password is being created without any friction or interruption to the user – and it rechecks that password each day, to detect when a password is no longer secure and trigger/mandate a password change.”
Aside from checking the passwords against this constantly updated list, it also prevents users from using:
- Common dictionary words or words that are often used for passwords (e.g., names of sports teams)
- Expected passwords and those that are too similar to users’ old password
- Context-specific passwords and variations (e.g., words that are specific to the business the enterprise is in, or words that employees living in a specific town or region might use)
- User-specific passwords and variations (e.g., their first name, last name, username, email address – based on those field values in Active Directory)
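The user-specific screening in the last bullet can be approximated with a containment test against tokens derived from a user's directory attributes. The attribute names mirror common Active Directory fields, but the functions and sample values here are illustrative only:

```python
import re

def user_tokens(fields: dict) -> set:
    """Split each directory attribute value into alphabetic tokens of
    4+ characters (e.g. 'maria.keller@example.com' yields
    {'maria', 'keller', 'example'})."""
    tokens = set()
    for value in fields.values():
        for part in re.split(r"[^a-z]+", str(value).lower()):
            if len(part) >= 4:
                tokens.add(part)
    return tokens

def uses_personal_info(password: str, fields: dict) -> bool:
    """True if any directory-derived token appears in the password."""
    pw = password.lower()
    return any(token in pw for token in user_tokens(fields))

# Sample AD-style attributes for a hypothetical user:
user = {"givenName": "Maria", "sn": "Keller",
        "sAMAccountName": "mkeller", "mail": "maria.keller@example.com"}
```

With these values, a password like "Keller2024!" would be rejected, while an unrelated passphrase would pass this particular check.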
Time and time again, it has been proven that if left to their own devices, users will employ predictable patterns when choosing a password and will reuse one password over multiple accounts.
When the compromised account doesn’t hold sensitive information or allow access to sensitive assets, these practices might not lead to catastrophic results for the user. But the stakes are much higher when it comes to enterprise accounts, and especially Active Directory accounts, as AD is most companies’ primary solution for access to network resources.
Traditional endpoint detection and response (EDR) solutions focus only on endpoint activity to detect attacks. As a result, they lack the context to analyze attacks accurately.
In this interview, Sumedh Thakar, President and Chief Product Officer, illustrates how Qualys fills the gaps by introducing a new multi-vector approach and the unifying power of its Cloud Platform to EDR, providing essential context and visibility to the entire attack chain.
How does Qualys Multi-Vector EDR differ from traditional EDR solutions?
Traditional EDR solutions focus only on endpoint activity, which lacks the context necessary to accurately analyze attacks and leads to a high rate of false positives. This can put an unnecessary burden on incident response teams and requires the use of multiple point solutions to make sense of it all.
Qualys Multi-Vector EDR leverages the strength of EDR while also extending the visibility and capabilities beyond the endpoint to provide a more comprehensive approach to protection. Multi-Vector EDR integrates with the Qualys Cloud Platform to deliver vital context and visibility into the entire attack chain while dramatically reducing the number of false positives and negatives as compared with traditional EDR.
This integration unifies multiple context vectors like asset discovery, rich normalized software inventory, end-of-life visibility, vulnerabilities and exploits, misconfigurations, in-depth endpoint telemetry and network reachability all correlated for assessment, detection and response in a single app. It provides threat hunters and incident response teams with crucial, real-time insight into what is happening on the endpoint.
Vectors and attack surfaces have multiplied. How do we protect these systems?
Many attacks today are multi-faceted. The suspicious or malicious activity detected at the endpoint is often only one small part of a larger, more complex attack. Companies need visibility across the environment to fully understand the attack and its impact on the endpoint—as well as the potential consequences elsewhere on their network. This is where Qualys’ ability to gather and assess the contextual data on any asset via Qualys Global IT Asset Inventory becomes so important.
The goal of EDR is detection and response, but you need a holistic view to do it effectively. When a threat or suspicious activity is detected, you need to act quickly to understand what the information or indicator means, and how you can pivot to take action to prevent any further compromise.
How can security teams take advantage of Qualys Multi-Vector EDR?
Attack prevention and detection are two sides of the same coin for security teams. With current endpoint tools focusing solely on endpoint telemetry, security teams end up bringing in multiple point solutions and threat intelligence feeds to figure out what is happening in their environment.
On top of that, they need to invest their budget and time in integrating these solutions and correlating data for actionable insights. With Qualys EDR, security teams can continuously collate asset telemetry such as processes, files and hashes to detect malicious activities and correlate it with natively integrated threat intel for prioritization score-based response actions.
Instead of reactively taking care of malicious events one endpoint at a time, security teams can easily pivot to inspect other endpoints across the hybrid infrastructure for exploitable vulnerabilities, MITRE-based misconfigurations, end-of-life or unapproved software and systems that lack critical patches.
Additionally, through native workflows that provide exact recommendations, security and IT teams can patch or remediate the endpoints for the security findings. This is an improvement over previous methods which require handshaking of data from one tool to another via complex integrations and manual workflows.
For example, Qualys EDR can help security teams not only detect MITRE-based attacks and malicious connections due to RDP (remote desktop) exploitation but can also provide visibility across the infrastructure. This highlights endpoints that can connect to the exploited endpoint and have RDP vulnerabilities or a MITRE-mapped configuration failure such as LSASS. Multi-Vector EDR then lets the user patch vulnerabilities and automatically remediate misconfigurations.
Thus, Qualys’ EDR solution is designed to equip security teams with advanced detections based on multiple vectors and rapid response and prevention capabilities, minimizing human intervention and simplifying the entire security investigation and analysis process for organizations of all sizes. Security practitioners can sign up for a free trial here.
What response strategies does Qualys Multi-Vector EDR use?
Qualys EDR, with its multi-layered, highly scalable cloud platform, retains telemetry data for an active and historical view and natively correlates it with multiple external threat intelligence feeds. This eliminates the need to rely on a single malware database and provides a prioritized, risk-based threat view. This helps security teams hunt for threats proactively and reactively with unified context of all security vectors, reducing alert fatigue and letting them concentrate on what is critical.
Qualys EDR provides comprehensive response capabilities that go beyond traditional EDR options, like killing processes and network connections, quarantining files, and much more. In addition, it uniquely orchestrates responses such as preventing future attacks by automatically correlating exploitable vulnerabilities to malware, patching endpoints and software directly from the cloud, and downloading patches from the vendor’s website without consuming VPN bandwidth.
91 percent of people know that using the same password on multiple accounts is a security risk, yet 66 percent continue to use the same password anyway. IT security practitioners are aware of good habits when it comes to strong authentication and password management, yet often fail to implement them due to poor usability or inconvenience.
To select a suitable password management solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.
Simran Anand, Head of B2B Growth, Dashlane
An organization’s security chain is only as strong as its weakest link – so selecting a password manager should be a top priority among IT leaders. While most look to the obvious: security (high grade encryption, 2FA, etc.), support, and price, it’s critical to also consider the end-user experience. Why? Because user adoption remains by far IT’s biggest challenge. Only 17 percent of IT leaders incorporate the end-UX when evaluating password management tools.
It’s not surprising, then, that those who have deployed a password manager in their company report only 23 percent adoption by employees. The end-UX has to be a priority for IT leaders who aim to guarantee secure processes for their companies.
Password management is too important a link in the security chain to be compromised by a lack of adoption (and simply telling employees to follow good password practices isn’t enough to ensure it actually happens). For organizations to leverage the benefits of next-generation password security, they need to ensure their password management solution is easy to use – and subsequently adopted by all employees.
Gerald Beuchelt, CISO, LogMeIn
As the world continues to navigate a long-term future of remote work, cybercriminals will continue to target users with poor security behaviors, given the increased time spent online due to COVID-19. Although organizations and people understand that passwords play a huge role in one’s overall security, many continue to neglect best password practices. For this reason, businesses should implement a password management solution.
It is essential to look for a password management solution that:
- Monitors poor password hygiene and provides visibility to the improvements that could be made to encourage better password management.
- Standardizes and enforces policies across the organization to support proper password protection.
- Provides a secure password management portal for employees to access all account passwords conveniently.
- Reports IT insights to provide a detailed security report of potential threats.
- Equips IT to audit the access controls users have with the ability to change permissions and encourage the use of new passwords.
- Integrates with previous and existing infrastructure to automate and accelerate workflows.
- Oversees when users share accounts to maintain a sense of security and accountability.
Using a password management solution that is effective is crucial to protecting business information. Finding the right solution will not only help to improve employee password behaviors but also increase your organization’s overall online security.
Michael Crandell, CEO, Bitwarden
Employees, like many others, face the daily challenge of remembering passwords to securely work online. A password manager simplifies generating, storing, and sharing unique and complex passwords – a must-have for security.
There are a number of reputable password managers out there. Businesses should prioritize those that work cross-platform and offer affordable plans. They should consider if the solution can be deployed in the cloud or on-premises. A self-hosting option is often preferred by some organizations for security and internal compliance reasons.
Password managers need to be easy-to-use for every level of user – from beginner to advanced. Any employee should be able to get up and running in minutes on the devices they use.
As of late, many businesses have shifted to a remote work model, which has highlighted the importance of online collaboration and the need to share work resources online. With this in mind, businesses should prioritize options that provide a secure way to share passwords across teams. Doing so keeps everyone’s access secure even when they’re spread out across many locations.
Finally, look for password managers built around an open source approach. Being open source means the source code can be vetted by experienced developers and security researchers who can identify potential security issues, and even contribute to resolving them.
Matt Davey, COO, 1Password
65% of people reuse passwords for some or all of their accounts. Often, this is because they don’t have the right tools to easily create and use strong passwords, which is why you need a password manager.
Opt for a password manager that gives you oversight over the things that matter most to your business: from who’s signed in from where, who last accessed certain items, or which email addresses on your domain have been included in a breach.
To keep the admin burden low, look for a password manager that allows you to manage access by groups, delegate admin powers, and manage users at scale. Depending on the structure of your business, it can be useful to grant access to information by project, location, or team.
You’ll also want to think about how a password manager will fit with your existing IAM/security stack. Some password managers integrate with identity providers, streamlining provisioning and administration.
Above all, if you want your employees to adopt your password manager of choice, make sure it’s easy to use: a password manager will only keep you secure if your employees actually use it.
Enterprise resource planning (ERP) systems are an indispensable tool for most businesses, allowing them to track business resources and commitments in real time and to manage day-to-day business processes (e.g., procurement, project management, manufacturing, supply chain, human resources, sales, accounting, etc.).
The various applications integrated in ERP systems collect, store, manage, and interpret sensitive data from the many business activities, which allows organizations to improve their efficiency in the long run.
Needless to say, the security of such a crucial system and all the data it stores should be paramount for every organization.
Common misconceptions about ERP security
“Since ERP systems have a lot of moving parts, one of the biggest misconceptions is that the built-in security is enough. In reality, while you may not have given access to your company’s HR data to a technologist on your team, they may still be able to access the underlying database that stores this data,” Mike Rulf, CTO of Americas Region, Syntax, told Help Net Security.
“Another misconception is that your ERP system’s access security is robust enough that you can allow people to access their ERP from the internet.”
In actual fact, the technical complexity of ERP systems means that security researchers are constantly finding vulnerabilities in them, and businesses that make them internet-facing and don’t think through or prioritize protecting them create risks that they may not be aware of.
When securing your ERP systems you must think through all the different ways someone could potentially access sensitive data and deploy business policies and controls that address these potential vulnerabilities, Rulf says. Patching security flaws is extremely important, as it ensures a safe environment for company data.
Advice for CISOs
While patching is necessary, it’s true that business leaders can’t disrupt day-to-day business activity for every new patch.
“Businesses need some way to mitigate any threats between when patches are released and when they can be fully tested and deployed. An application firewall can act as a buffer to allow a secure way to access your proprietary technology and information during this gap. Additionally, an application firewall allows you to separate security and compliance management from ERP system management enabling the checks and balances required by most audit standards,” he advises.
He also urges CISOs to integrate the login process with their corporate directory service such as Active Directory, so they don’t have to remember to turn off an employee’s credentials in multiple systems when they leave the company.
To make mobile access to ERP systems safer for a remote workforce, CISOs should definitely leverage multi-factor authentication that forces employees to prove their identity before accessing sensitive company information.
“For example, Duo sends a text to an employee’s phone when logging in outside the office. This form of security ensures that only the people granted access can utilize those credentials,” he explained.
VPN technology should also be used to protect ERP data when employees access it from new devices and unfamiliar Wi-Fi networks.
“VPNs today can enable organizations to validate these new/unfamiliar devices adhere to a minimum security posture: for example, allowing only devices with a firewall configured and appropriate malware detection tools installed can access the network. In general, businesses can’t really ever know where their employees are working and what network they’re on. So, using VPNs to encrypt that data being sent back and forth is crucial.”
On-premises vs. cloud ERP security
The various SaaS applications in your ERP, such as Salesforce and Oracle Cloud Apps, leave you beholden to those service providers to manage your applications’ security.
“You need to ask your service providers about their audit compliance and documentation. Because they are providing services critical to your business, you will be asked about these third parties by auditors during a SOC audit. You’ll thus need to expand your audit and compliance process (and the time it takes) to include an audit of your external partners,” Rulf pointed out.
“Also, when you move to AWS or Azure, you’re essentially building a new virtual data center, which requires you to build and invest in new security and management tools. So, while the cloud has a lot of great savings, you need to think about the added and unexpected costs of things like expanded audit and compliance.”
One of the cornerstones of a security leader’s job is to successfully evaluate risk. A risk assessment is a thorough look at everything that can impact the security of an organization. When a CISO determines the potential issues and their severity, measures can be put in place to prevent harm from happening.
To select a suitable risk assessment solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.
Jaymin Desai, Offering Manager, OneTrust
First, consider what type of assessment or control content – such as frameworks, laws, and standards – is readily available for your business (e.g., NIST, ISO, CSA CAIQ, SIG, HIPAA, PCI DSS, NYDFS, GDPR, EBA, CCPA). This is an area where you can leverage templates to bypass building and updating your own custom records.
Second, consider the assessment formats. Look for a technology that can automate workflows to support consistency and streamline completion. This level of standardization helps businesses scale risk assessments to the line of business users. A by-product of workflow-based structured evaluations is the ability to improve your reporting with reliable and timely insights.
One other key consideration is how the risk assessment solution can scale with your business. This is important in evaluating your efficiencies over time. Are the assessments static exports to Excel, or can they be integrated into a live risk register? Can you map insights gathered from responses to adjust risk across your assets, processes, vendors, and more? Consider the core data structure and how you can model and adjust it as your business changes and your risk management program matures.
The solution should enable you to discover, remediate, and monitor granular risks in a single, easy-to-use dashboard while engaging with the first line of your business to keep risk data current and context-rich with today’s information.
Brenda Ferraro, VP of Third Party Risk, Prevalent
The right risk assessment solution will drive program maturity from compliance, to data breach avoidance, to third-party risk management.
There are seven key fundamentals that must be considered:
- Network repository: Uses the ‘fill out once, use with many’ approach to rapidly obtain risk information awareness.
- Vendor risk visibility: Harmonizes inside-out and outside-in vendor risk and proactively shares actionable insights to enhance decision-making on prioritization, remediation, and compliance.
- Flexible automation: Helps the enterprise place its focus quickly and accurately on risk management, not administrative tasks, to reduce third-party risk management process costs.
- Scalability: Adapts to changing processes, risks, and business needs.
- Tangible ROI: Reduces time and costs associated with the vendor management lifecycle to justify cost.
- Advisory and managed services: Has subject matter experts to assist with improving your program by leveraging the solution.
- Reporting and dashboards: Provides real-time intelligence to drive more informed, risk-based decisions internally and externally at every business level.
The right risk assessment solution selection will enable dynamic evolution for you and your vendors by using real-time visibility into vendor risks, more automation and integration to speed your vendor assessments, and by applying an agile, process-driven approach to successfully adapt and scale your program to meet future demands.
Fred Kneip, CEO, CyberGRX
Organizations should look for a scalable risk assessment solution that has the ability to deliver informed, risk-reducing decision making. To be truly valuable, risk assessments need to go beyond lengthy questionnaires that serve as check-the-box exercises without providing insight, and beyond a simple outside-in rating that, alone, can be misleading.
Rather, risk assessments should help you collect accurate and validated risk data that enables decision making and, ultimately, allows you to identify and reduce risk across your ecosystem at the individual level as well as the portfolio level.
Optimal solutions will help you identify which vendors pose the greatest risk and require immediate attention as well as the tools and data that you need to tell a complete story about an organization’s third-party cyber risk efforts. They should also help leadership understand whether risk management efforts are improving the organization’s risk posture and if the organization is more or less vulnerable to an adverse cyber incident than it was last month.
Jake Olcott, VP of Government Affairs, BitSight
Organizations are now being held accountable for the performance of their cybersecurity programs, and ensuring businesses have a strong risk assessment strategy in place can have a major impact. The best risk assessment solutions meet four specific criteria: they are automated, continuous, comprehensive and cost-effective.
Leveraging automation for risk assessments means that the technology is taking the brunt of the workload, giving security teams more time to focus on other tasks important to the business. Risk assessments should be continuous as well. Taking a point-in-time approach is inadequate and does not provide the full picture, so it’s important that assessments are delivered on an ongoing basis.
Risk assessments also need to be comprehensive and cover the full breadth of the business including third and fourth party risks, and address the expanding attack surface that comes with working from home.
Lastly, risk assessments need to be cost-effective. As budgets are being heavily scrutinized across the board, ensuring that a risk assessment solution does not require significant resources can make a major impact for the business and allow organizations to maximize their budgets to address other areas of security.
Mads Pærregaard, CEO, Human Risks
When you pick a risk assessment tool, you should look for three key elements to ensure a value-adding and effective risk management program:
1. Reduce reliance on manual processes
2. Reduce complexity for stakeholders
3. Improve communication
Tools that rely on constant manual data entry, remembering to make updates and a complicated risk methodology will likely lead to outdated information and errors, meaning valuable time is lost and decisions are made too late or on the wrong basis.
Tools that automate processes and data gathering give you awareness of critical incidents faster, reducing response times. They also reduce dependency on a few key individuals that might otherwise have responsibility for updating information, which can be a major point of vulnerability.
Often, non-risk management professionals are involved with or responsible for implementation of mitigating measures. Look for tools that are user-friendly and intuitive, so it takes little training time and teams can hit the ground running.
Critically, you must be able to communicate the value that risk management provides to the organization. The right tool will help you keep it simple, and communicate key information using up-to-date data.
Steve Schlarman, Portfolio Strategist, RSA Security
Given the complexity of risk, risk management programs must rely on a solid technology infrastructure, and a centralized platform is a key ingredient of success. Risk assessment workflows need to share data and establish practices that promote a strong governance culture.
Choosing a risk management platform that can not only solve today’s tactical issues but also lay a foundation for long-term success is critical.
Business growth is interwoven with technology strategies and therefore risk assessments should connect both business and IT risk management processes. The technology solution should accelerate your strategy by providing elements such as data taxonomies, workflows and reports. Even with best practices within the technology, you will find areas where you need to modify the platform based on your unique needs.
The technology should make that easy. As you engage more front-line employees and cross-functional groups, you will need the flexibility to make adjustments. There are some common entry points to implement risk assessment strategies but you need the ability to pivot the technical infrastructure towards the direction your business needs.
You need a flexible platform to manage multiple dimensions of risk and choosing a solution provider with the right pedigree is a significant consideration. Today’s risks are too complex to be managed with a solution that’s just “good enough.”
Yair Solow, CEO, CyGov
The starting point for any business should be clarity on the frameworks they are looking to cover both from a risk and compliance perspective. You will want to be clear on what relevant use cases the platform can effectively address (internal risk, vendor risk, executive reporting and others).
Once this has been clarified, it is a question of weighing up a number of parameters. For a start, how quickly can you expect to see results? Will it take days, weeks, months or perhaps more? Businesses should also weigh up the quality of user experience, including how difficult the solution is to customize and deploy. In addition, it is worth considering the platform’s project management capabilities, such as efficient ticketing and workflow assignments.
Usability aside, there are of course several important factors when it comes to the output itself. Is the data produced by the solution in question automatically analyzed and visualized? Are the automatic workflows replacing manual processes? Ultimately, in order to assess the platform’s usefulness, businesses should also be asking to what extent the data is actionable, as that is the most important output.
This is not an exhaustive list, but these are certainly some of the fundamental questions any business should be asking when selecting a risk assessment solution.
As time passes, state-backed hacking is becoming an ever bigger problem, with attackers stealing money, information, credit card data, intellectual property and state secrets, and probing critical infrastructure.
While Chinese, Russian, North Korean and Iranian state-backed APT groups get most of the spotlight (at least in the Western world), other nations are beginning to join in the “fun.”
It seems to be a free-for-all, as the world has yet to decide on laws and norms regulating cyber attacks and cyber espionage in peacetime, and to find a way to make nation-states abide by them.
There is so far one international treaty on cybercrime (the Council of Europe Convention on Cybercrime), accepted by the nations of the European Union, the United States and other like-minded allies, notes Dr. Panayotis Yannakogeorgos. It is contested by Russia and China, so it is not global and applies only to the signatories.
Dr. Yannakogeorgos, who’s a professor and faculty lead for a graduate degree program in Global Security, Conflict, and Cybercrime at the NYU School of Professional Studies Center for Global Affairs, believes this treaty could be both a good model text on which nations around the world can harmonize their own domestic criminal codes, as well as the means to begin the lengthy diplomatic negotiations with Russia and China to develop an international criminal law for cyber.
Cyber deterrence strategies
In the meantime, states are left to their own devices when it comes to devising a cyber deterrence strategy.
The US has been publicly attributing cyber espionage campaigns to state-backed APTs and regularly releasing technical information related to those campaigns. Its legislators have been introducing legislation that would impose sanctions on foreign individuals engaging in hacking activity that compromises economic and national security or public health, and its Department of Justice has been steadily pushing out indictments against state-backed cyber attackers and spies.
But while, for example, indictments by the US Department of Justice cannot reasonably be expected to result in the extradition of a hacker who has been accused of stealing corporate or national security secrets, the indictments and other forms of public attribution of cyber enabled malicious activities serve several purposes beyond public optics, Dr. Yannakogeorgos told Help Net Security.
“First, they send a clear signal to China and the world on where the United States stands in terms of how governmental resources in cyberspace should be used by responsible state actors. That is, in order to maintain fair and free trade in a global competitive environment, a nation’s intelligence services should not be engaged in stealing corporate secrets and then handing those secrets over to companies for their competitive advantage in global trade,” he explained.
“Second, making clear attribution statements helps build a framework within which the United States can work with our partners and allies on countering threats. This includes joint declarations with allies or multilateral declarations where the sources of threats and the technical nature of the infrastructure used in cyber espionage are declared.”
Finally, when public attribution is made, technical indicators of compromise, toolsets used, and other aspects are typically released as well.
“These technical releases have a very practical impact in that they ‘burn’ the infrastructure that a threat actor took time, money, and talent to develop and requires them to rebuild or retool. Certainly, the malware and other infrastructure can still be used against targets that have not calibrated their cyber defenses to block known pathways for attack. Defense is hard, and there is a complex temporal dimension to going from public indicators of compromise in attribution reports; however, once the world knows it begins to also increase the cost on the attacker to successfully hack a target,” he added.
“In general, a strategy that is focused on shaping the behavior of a threat needs to include actively dismantling infrastructure where it is known. Within the US context, this has been articulated as persistently engaging adversaries through a strategy of ‘defending forward.’”
The problem of attack attribution
The issue of how cyber attack attribution should be handled and confirmed also deserves to be addressed.
Dr. Yannakogeorgos says that, while attribution of cyber attacks is definitely not as clear-cut as seeing smoke coming out of a gun in the real world, with the robust law enforcement, public private partnerships, cyber threat intelligence firms, and information sharing via ISACs, the US has come a long way in terms of not only figuring out who conducted criminal activity in cyberspace, but arresting global networks of cyber criminals as well.
Granted, things get trickier when these actors are working for or on behalf of a nation-state.
“If these activities are part of a covert operation, then by definition the government will have done all it can for its actions to be ‘plausibly deniable.’ This is true for activities outside of cyberspace as well. Nations can point fingers at each other, and present evidence. The accused can deny and say the accusations are based on fabrications,” he explained.
“However, at least within the United States, we’ve developed a very robust analytic framework for attribution that can eliminate reasonable doubt amongst friends and allies, and can send a clear signal to planners on the opposing side. Such analytic frameworks could become norms themselves to help raise the evidentiary standard for attribution of cyber activities to specific nation states.”
A few years ago, Paul Nicholas (at the time the director of Microsoft’s Global Security Strategy) and various researchers proposed the creation of an independent, global organization that would investigate and publicly attribute major cyber attacks – though they admitted that, in some cases, decisive attribution may be impossible.
More recently, Kristen Eichensehr, a Professor of Law at the University of Virginia School of Law with expertise in cybersecurity issues and cyber law, argued that “states should establish an international law requirement that public attributions must include sufficient evidence to enable crosschecking or corroboration of the accusations” – and not just by allies.
“In the realm of nation-state use of cyber, there have been dialogues within the United Nations for nearly two decades. The most recent manifestation is the UN Group of Governmental Experts that have discussed norms of responsible state behavior and issued non-binding statements to guide nations as they develop cyber capabilities,” Dr. Yannakogeorgos pointed out.
“Additionally, private sector actors, such as the coalition declaring the need for a Geneva Convention for cyberspace, also have a voice in the articulation of norms. Academic groups such as the group of individuals involved in the research, debating, and writing of the Tallinn Manuals 1.0 and 2.0 are also examples of scholars who are articulating norms.”
And while articulating and agreeing to specific norms will no doubt be a difficult task, he says that their implementation by signatories will be even harder.
“It’s one thing to say that ‘states will not target each other’s critical infrastructure in cyberspace during peacetime’ and another to not have a public reaction to states that are alleged to have not only targeted critical infrastructure but actually caused digital damage as a result of that targeting,” he concluded.
As COVID-19 forced organizations to re-imagine how the workplace operates just to maintain basic operations, HR departments and their processes became key players in the game of keeping our economy afloat while keeping people alive.
Without a doubt, people form the core of any organization. The HR department must strike an increasingly delicate balance while fulfilling the myriad of needs of workers in this “new normal” and supporting organizational efficiency. As the tentative first steps of re-opening are being taken, many organizations remain remote, while others are transitioning back into the office environment.
Navigating the untested waters of managing HR through this shift to remote and back again is complex enough without taking cybercrime and data security into account, yet it is crucial that HR do exactly that. The data stored by HR is the easy payday cybercriminals are looking for and a nightmare keeping CISOs awake at night.
Why securing HR data is essential
If compromised, the data stored by HR can do a devastating amount of damage to both the company and the personal lives of its employees. HR data is one of the highest risk types of information stored by an organization given that it contains everything from basic contractor details and employee demographics to social security numbers and medical information.
Many state and federal laws and regulations govern the storage, transmission and use of this high-value data. The sudden shift to a more distributed workforce due to COVID-19 increased the risk: with a large portion of the HR workforce remote, there are more, and higher, levels of access across cloud, VPN and personal networks.
Steps to security
Any decent security practitioner will tell you that no security setup is foolproof, but there are steps that can be taken to significantly reduce risk in an ever-evolving environment. A multi-layer approach to security offers better protection than any single solution. Multiple layers of protection might seem redundant, but if one layer fails, the others fill in the gaps.
Securing HR-related data needs to be approached from both a technical and end user perspective. This includes controls designed to protect the end user or force them into making appropriate choices, and at the same time providing education and awareness so they understand how to be good stewards of their data.
Secure the identity
The first step to securing HR data is making sure that the ways in which users access data are both secure and easy to use. Each system housing HR data should be protected by a federated login of some variety. Federated logins use a primary source of identity for managing usernames and passwords such as Active Directory.
When a user logs in, the software uses a protocol like LDAP, SAML, or OAuth to query the primary source of identity, validating the username and password and confirming that the user has appropriate access rights. Users then only have to remember a single username and password, and the organization can ensure that it complies with mandated complexity policies.
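The delegation pattern described above can be sketched in a few lines; `DirectoryEntry` and `make_entry` are illustrative in-memory stand-ins for a real directory backend such as Active Directory queried over LDAP:

```python
import hashlib
from dataclasses import dataclass

# Hypothetical stand-in for the primary source of identity; in production
# the query would go to a directory service over LDAP, SAML, or OAuth.
@dataclass(frozen=True)
class DirectoryEntry:
    password_hash: str
    groups: frozenset

def make_entry(password: str, groups: set) -> DirectoryEntry:
    # Real directories store salted, slow hashes; plain SHA-256 is for brevity.
    return DirectoryEntry(hashlib.sha256(password.encode()).hexdigest(),
                          frozenset(groups))

class FederatedLogin:
    """Every HR application delegates authentication, plus an
    access-rights check, to the one primary source of identity."""

    def __init__(self, directory: dict):
        self._directory = directory  # username -> DirectoryEntry

    def authenticate(self, username: str, password: str,
                     required_group: str) -> bool:
        entry = self._directory.get(username)
        if entry is None:
            return False
        if hashlib.sha256(password.encode()).hexdigest() != entry.password_hash:
            return False
        # The directory also answers the authorization question.
        return required_group in entry.groups
```

The point of the pattern is that individual HR systems never store credentials themselves; they only ask the central source "is this user who they claim to be, and do they have the right to enter?"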
The next step to credential security is to add a second factor of authentication on every system storing HR data. This is referred to as Multi-factor Authentication (MFA) and is a vital preventative measure when used well. The primary rule of MFA says that the second factor should be something “the user is or has” to be most effective.
This second factor of authentication can be anything from a PIN generated on a mobile device to a biometric check to ensure the person entering the password is, in fact, the actual owner. Both of these systems are easy for end users to use and add very little additional friction to the authentication effort, while significantly reducing the risk of credential theft, as it’s difficult for someone to compromise users’ credentials and steal their mobile device or a copy of their fingerprints.
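The PIN-generated-on-a-mobile-device factor is typically a time-based one-time password. A minimal RFC 6238 TOTP generator, built only from the Python standard library, looks like this (real deployments should use a vetted library and a per-user shared secret):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6,
         now: float = None) -> str:
    """RFC 6238 time-based one-time password: the code an
    authenticator app displays, recomputed by the server for comparison."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides derive the code from a shared secret and the current time window, a stolen password alone is useless without the device holding the secret.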
In today’s world, HR users working from somewhere other than the office is not unusual. With this freedom comes the need to secure the means by which they access data, regardless of the network they are using. The best way to accomplish this is to set up a VPN and ensure that all HR systems are only accessible either from inside of the corporate network or from IPs that are connected to the VPN.
A VPN creates an encrypted tunnel between the end user’s device and the internal network. The use of a VPN protects the user against snooping even if they are using an unsecured network like a public Wi-Fi at a coffee shop. Additionally, VPNs require authentication and, if that includes MFA, there are three layers of security to ensure that the person connecting in is a trusted user.
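Restricting HR systems to the corporate network or the VPN's client address pool can be enforced with a simple source-address check; the network ranges below are hypothetical examples:

```python
import ipaddress

# Hypothetical ranges: the internal corporate network plus the pool
# the VPN concentrator assigns to authenticated clients.
TRUSTED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),      # internal corporate network
    ipaddress.ip_network("172.16.32.0/24"),  # VPN client address pool
]

def hr_access_allowed(client_ip: str) -> bool:
    """Admit requests to HR systems only from inside the corporate
    network or from addresses handed out by the VPN."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in TRUSTED_NETWORKS)
```

In practice this check lives in a firewall rule or reverse proxy rather than application code, but the logic is the same: an address outside the trusted ranges never reaches the login page.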
Next, you have to ensure that access is being used appropriately or that no anomalous use is taking place. This is done through a combination of good logging and good analytics software. Solutions that leverage AI or ML to review how access is being utilized and identify usage trends further increase security. The logging solution verifies appropriate usage while the analysis portion helps to identify any questionable activity taking place. This functions as an early warning system in case of compromised accounts and insider threats.
Comprehensive analytics solutions will notice trends in behavior and flag an account if the user deviates from their normal routine. If odd activity occurs (e.g., going through every HR record), the system alerts an administrator to delve deeper into why this user is viewing so many files. If it notices access coming in through the VPN from IP ranges outside of the expected geographical areas, accounts can be automatically disabled while alerts are sent out and a deeper investigation takes place. These measures shrink the scope of an incident and reduce the damage should an attack occur.
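A toy version of this baseline-and-flag logic, assuming a hypothetical map of per-user daily access counts (a real analytics product models far more signals: time of day, geolocation, record types):

```python
from statistics import mean, stdev

def flag_anomalies(history: dict, today: dict,
                   threshold_sigma: float = 3.0) -> list:
    """Flag users whose record-access volume today is far above their
    own baseline. `history` maps user -> list of past daily access
    counts; `today` maps user -> today's count."""
    flagged = []
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # not enough baseline to judge
        mu, sigma = mean(counts), stdev(counts)
        # Floor the noise so a perfectly steady baseline can't trip on +1.
        limit = mu + threshold_sigma * max(sigma, 1.0)
        if today.get(user, 0) > limit:
            flagged.append(user)
    return flagged
```

An HR administrator who normally touches a couple dozen records stays quiet; an intern suddenly pulling hundreds of records gets surfaced for review.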
Secure the user
Security awareness training for end users is one of the most essential components of infrastructure security. The end user is a highly valuable target because they already have access to internal resources. The human element is often considered a high-risk factor because humans are easier to “hack” than passwords or automatic security controls.
Social engineering attacks succeed when people aren’t educated to spot red flags indicating an attack is being attempted. Social engineering attacks are the easiest and least costly option for an attacker because any charismatic criminal with good social skills and a mediocre acting ability can be successful. The fact that this type of cyberattack requires no specialized technical skill expands the potential number of attackers.
The most important step of a solid layered security model is the one that prevents these attacks through education and awareness. By providing end users engaging, thorough, and relevant training about types of attacks such as phishing and social engineering, organizations arm their staff with the tools they need to avoid malicious links, prevent malware or rootkit installation, and dodge credential theft.
No perfect security
No matter where the job gets done, HR needs to deliver effective services to employees while still taking steps to keep employee data safe. Even though an organization cannot control every aspect of how work is getting done, these steps will help keep sensitive HR data safe.
Control over accounts, how they are monitored, and what they are accessing are important steps. Arming the end user directly, with the awareness needed to prevent having their good intentions weaponized, requires a combination of training and controls that create a pro-active system of prevention, early warnings, and swift remediation. There is no perfect security solution for protecting HR data, but multiple, overlapping security layers can protect valuable HR assets without making it impossible for HR employees to do their work.
Endpoint protection has evolved to safeguard against complex malware and evolving zero-day threats.
To select an appropriate endpoint protection solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.
Theresa Lanowitz, Head of Evangelism, AT&T Cybersecurity
Corporate endpoints represent a top area of security risk for organizations, especially considering the shift to virtual operations brought on by COVID-19. As malicious actors target endpoints with new types of attacks designed to evade traditional endpoint prevention tools, organizations must seek out advanced endpoint detection and response (EDR) solutions.
Traditionally, enterprise EDR solutions carry high cost and complexity, making it difficult for organizations to implement EDR successfully. While many security teams recognize the need for EDR, most do not have the resources to manage a standalone endpoint security solution.
For this reason, when selecting an EDR solution it’s critical to seek a unified solution for threat detection, incident response and compliance that can be incorporated into an organization’s existing security stack without added cost or complexity. Look for endpoint solutions where security teams can deploy a single platform that delivers advanced EDR combined with many other essential security capabilities in a single pane of glass, driving efficiency of security and network operations.
Overall, organizations should select an EDR solution that enables security teams to detect and respond to threats faster while eliminating the cost and complexity of maintaining yet another point security solution. This approach can help organizations bolster their cybersecurity and network resiliency, with an eye towards securing the various endpoints used in today’s virtual workforce.
Rick McElroy, Cyber Security Strategist, VMware Carbon Black
With the continuously evolving threat landscape, there are a number of factors to consider during the selection process. Whether a security team is looking to replace antiquated malware prevention or empower a fully-automated security operations process, here are the key considerations:
- Does the platform have the flexibility for your environment? Not all endpoints are the same, therefore broad coverage of operating systems is a must.
- Does the vendor support the MITRE ATT&CK Framework for both testing and maturing the product? Organizations need to test security techniques, validate coverage and identify gaps in their environments, and implement mitigation to reduce attack surface.
- Does it provide deeper visibility into attacks than traditional antivirus? Organizations need deeper context to make a prevention, detection or response decision.
- Does the platform provide multiple security functions in one lightweight sensor? Compute is expensive; endpoint security tools should be as non-impactful to the system as possible.
- Is the platform usable at scale? If your endpoint protection platform isn’t centrally analyzing behaviors across millions of endpoints, it won’t be able to spot minor fluctuations in normal activity to reveal attacks.
- Does the vendor’s roadmap meet the future needs of the organization? Any tool selected should allow teams the opportunity for growth and ability to use it for multiple years, building automated processes around it.
- Does the platform have open APIs? Teams want to integrate endpoints with SIEM, SOAR platforms and network security systems.
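As a sketch of what the open-API point enables, here is a hypothetical normalizer that flattens an EDR alert into an event a SIEM or SOAR platform could ingest. All field names are illustrative, not any vendor's actual schema:

```python
import json

def to_siem_event(edr_alert: dict) -> dict:
    """Normalize a hypothetical EDR alert into a flat event suitable
    for SIEM ingestion, preserving the raw payload for forensics."""
    return {
        "source": "edr",
        "severity": edr_alert.get("severity", "unknown"),
        "host": edr_alert.get("device", {}).get("hostname", "unknown"),
        "technique": edr_alert.get("mitre_technique", "unmapped"),
        # Keep the original alert verbatim so nothing is lost in translation.
        "raw": json.dumps(edr_alert, sort_keys=True),
    }
```

This kind of thin translation layer is exactly what an open API makes cheap to build, and what a closed platform forces you to live without.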
David Ngo, VP Metallic Products and Engineering, Commvault
With millions working remotely due to COVID-19, laptop endpoints being used by employees while they work from home are particularly vulnerable to data loss.
This has made it more important than ever for businesses to select a strong endpoint protection solution that:
- Lowers the risk of lost data. The best solutions have automated backups that run multiple times during the day to ensure recent data is protected and security features such as geolocation and remote wipe for lost or stolen laptops. Backup data isolation from source data can also provide an extra layer of protection from ransomware. In addition, anomaly detection capabilities can identify abnormal file access patterns that indicate an attack.
- Enables rapid recovery. If an endpoint is compromised, the solution should accelerate data recovery by offering metadata search for quick identification of backup data. It’s also important for the solution to provide multiple granular restore options – including point in time, out of place, and cross OS restores – to meet different recovery needs.
- Limits user and IT staff administration burdens. Endpoint solutions with silent install and backup capabilities require no action from end users and do not impact their productivity. The solution should also allow users and staff to access backup data, anytime, anywhere, from a browser-enabled device, and make it possible for employees to search and restore files themselves.
James Yeager, VP of Public Sector, CrowdStrike
Decision-makers seeking the best endpoint protection (EPP) solution for their business should be warned that legacy security solutions are generally ineffective, leaving organizations highly susceptible to breaches and placing a huge burden on security teams and users.
Legacy tools, built on on-premises architectures, are unable to keep up with the capabilities of a modern EPP solution, like collecting data in real time, storing it for long periods and analyzing it in a timely manner. Storing threat telemetry data in the cloud makes it possible to quickly search petabytes of data in an effort to glean historical context for activities running on any managed system.
Beware of retrofitted systems from vendors advertising newer “cloud-enabled” features. Simply put, these “bolt-on” models are unable to match the performance of a cloud-native solution. Buyers run the risk of their security program becoming outdated with tools that cannot scale to meet the growing needs of today’s modern, distributed workforce.
Furthermore, comprehensive visibility into the threat landscape and overall IT hygiene of your enterprise are foundational for efficient security. Implementing cloud-native endpoint detection and response (EDR) capabilities into your security stack that leverages machine learning will deliver visibility and detection for threat protection across the entire kill chain. Additionally, a “hygiene first” approach will help you identify the most critical risk areas early-on in the threat cycle.
Dustin Rigg Hillard, CTO at eSentire, is responsible for leading product development and technology innovation. His vision is rooted in simplifying and accelerating the adoption of machine learning for new use cases.
In this interview Dustin talks about modern digital threats, the challenges cybersecurity teams face, cloud-native security platforms, and more.
What types of challenges do in-house cybersecurity teams face today?
The main challenges that in-house cybersecurity teams have to deal with today are largely due to ongoing security gaps. As a result, overwhelmed security teams don’t have the visibility, scalability or expertise to adapt to an evolving digital ecosystem.
Organizations are moving toward the adoption of modern and transformative IT initiatives that are outpacing the ability of their security teams to adapt. For security teams, this means constant change, disruptions with unknown consequences, increased risk, more data to decipher, more noise, more competing priorities, and a growing, disparate, and diverse IT ecosystem to protect. The challenge for cybersecurity teams is finding effective ways to deliver and maintain security at the speed of digital transformation, ensuring that every new technology, digital process, customer and partner interaction and innovation is protected.
Cybercrime is being conducted at scale, and threat actors are constantly changing techniques. What are the most significant threats at the moment?
Threat actors, showing their usual agility, have shifted efforts to target remote workers and take advantage of current events. We are seeing attackers exploiting user behavior by misleading users into opening and executing a malicious file, going to a malicious site or handing over information, typically using lures which create urgency (e.g., by masquerading as payment and invoice notifications) or leverage current crises and events.
What are the main benefits of cloud-native security platforms?
A cloud-native platform offers important advantages over legacy approaches—advantages that provide real, important benefits for cybersecurity providers and the clients who depend on them.
- A cloud-native architecture is more easily extensible, which means more features, sooner, to enable analysts and protect clients
- A cloud-native platform offers higher performance because the microservices inside it can maximally utilize the cloud’s vast compute, storage and network resources; this performance is necessary to ingest and process the vast streams of data which need to be processed to keep up with real-time threats
- A cloud-native platform can effortlessly scale to handle increased workloads without degradation to performance or client experience
Security platforms usually deliver a variety of metrics, but how does an analyst know which ones are meaningful?
The most important metrics are the ones that show how the platform delivers security outcomes:
- How many threats were stopped with active response?
- How many potentially malicious connections were blocked?
- How many malware executions were halted?
- How quickly was a threat contained after initial detection?
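The last metric, time to contain, is straightforward to compute once detection and containment timestamps are recorded; a minimal sketch:

```python
from datetime import datetime, timedelta

def mean_time_to_contain(incidents: list) -> timedelta:
    """Mean time from initial detection to containment, one of the
    outcome metrics a platform should surface. `incidents` is a list
    of (detected_at, contained_at) datetime pairs."""
    deltas = [contained - detected for detected, contained in incidents]
    return sum(deltas, timedelta()) / len(deltas)
```

Tracking this number over time (rather than as a one-off) is what turns it from a vanity statistic into evidence that response is actually improving.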
Modern security platforms help simplify data analytics by delivering capabilities that amplify threat detection, response and mitigation activities; deliver risk-management insights; and help organizations stay ahead of potential threats.
Cloud-native security platforms can output a wide range of data insights including information about threat actors, indicators of compromise, attack patterns, attacker motivations and capabilities, signatures, CVEs, tactics, and vulnerabilities.
How can security teams take advantage of the myriad of security tools that have been building in the organization’s IT ecosystem for many years?
Cloud-native security platforms ingest data from a wide variety of sources such as security devices, applications, databases, cloud systems, SaaS platforms, IoT devices, network traffic and endpoints. Modern security platforms can correlate and analyze data from all available sources, providing a complete picture of the organization’s environment and security posture for effective decision-making.
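The correlation step can be illustrated with a toy example: group events from different sources by shared indicator of compromise, and treat indicators seen by multiple independent sources as higher confidence. The event schema here is hypothetical:

```python
from collections import defaultdict

def correlate_by_indicator(events: list) -> dict:
    """Group events from disparate sources (endpoint, firewall, SaaS
    logs, ...) by shared indicator of compromise. Each event is a dict
    with 'source' and 'indicator' keys."""
    grouped = defaultdict(list)
    for event in events:
        grouped[event["indicator"]].append(event["source"])
    # An indicator reported by multiple independent sources is far
    # less likely to be a false positive.
    return {ioc: sources for ioc, sources in grouped.items()
            if len(set(sources)) > 1}
```

A real platform does this continuously and at scale across every ingested feed, but the underlying move is the same join on shared indicators.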
The process of evaluating solid state drives (SSDs) for enterprise applications can present a number of challenges. You want maximum performance for the most demanding servers running mission-critical workloads.
We sat down with Scott Hamilton, Senior Director, Product Management, Data Center Systems at Western Digital, to learn more about SSDs and how they fit into current business environments and data centers.
What features do SSDs need to have in order to offer uncompromised performance for the most demanding servers running mission-critical workloads in enterprise environments? What are some of the misconceptions IT leaders are facing when choosing SSDs?
First, IT leaders must understand environmental considerations, including the application, use case and its intended workload, before committing to specific SSDs. It’s well understood that uncompromised performance is paramount to support mission critical workloads in the enterprise environment. However, performance has different meanings to different customers for their respective use cases and available infrastructure.
Uncompromised performance may focus more on latency (and associated consistency), IOPs (and queue depth) or throughput (and block size) depending on the use case and application.
Additionally, the scale of the application and solution dictate the level of emphasis, whether it be interface-, device-, or system-level performance. Similarly, mission-critical workloads may have different expectations or requirements e.g. high availability support, disaster recovery, or performance and performance consistency. This is where IT leaders need to rationalize and test the best fit for their use case.
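The interplay between IOPS, block size and throughput mentioned above reduces to a simple identity, throughput = IOPS x block size, which is worth checking before comparing spec sheets (the drive figures below are illustrative, not real products):

```python
def throughput_gib_s(iops: float, block_size_kib: float) -> float:
    """Sequential data rate implied by an IOPS figure at a given block
    size: throughput = IOPS x block size. Quoting one number without
    the other two is how spec-sheet comparisons go wrong."""
    return iops * block_size_kib / (1024 * 1024)  # KiB/s -> GiB/s

# A drive rated 500k IOPS at 4 KiB moves far less data than one
# sustaining 50k IOPS at 128 KiB:
small_blocks = throughput_gib_s(500_000, 4)   # ~1.9 GiB/s
large_blocks = throughput_gib_s(50_000, 128)  # ~6.1 GiB/s
```

This is why "uncompromised performance" means different things to a latency-sensitive database and a throughput-bound streaming workload, even on the same interface.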
Today there are many different SSD segments that fit certain types of infrastructure choices and use cases. For example, PCIe SSD options are available from boot drives to performance NVMe SSDs, and they come in different form factors such as M.2 (ultra-light and thin) and U.2 (standard 2.5-inch), to name a few. It’s also important to consider power/performance. Some applications do not require interface saturation and can leverage low-power, single-port mainstream SSDs instead of dual-port, high-power, higher-endurance and higher-performance drives.
IT managers have choices today, which they should carefully evaluate, test and rationalize against their infrastructure elasticity and scaling needs, ultimately aligning their future system architecture strategies with the best-fit SSD. My final word of advice: sometimes it is not wise to pick the highest-performing SSD available on the market, as you do not want to pay for a rocket engine for a bike. Understanding the use case and success metrics, e.g., price-capacity, latency, price-performance (either $/IOPS or $/GB/sec), will help eliminate some of the misconceptions IT leaders face when choosing SSDs.
How has the pandemic accelerated cloud adoption and how has that translated to digital transformation efforts and the creation of agile data infrastructures?
The rapid increase in our global online footprint is stressing IT infrastructure from virtual office, live video calls, online classes, healthcare services and content streaming to social media, instant messaging services, gaming and e-commerce. This is the new normal of our personal and professional lives. There is no doubt that the pandemic has increased dependence on cloud data centers and services. Private, public and hybrid cloud use cases will continue to co-exist due to costs, data governance and strategies, security and legacy application support.
Digital transformation continues all around us, and the pandemic accelerated these initiatives. Before the pandemic, digital transformation projects generally spanned several years, with lengthy and exhaustive cycles to go online and scale up a web footprint. However, 2020 has really surprised all of us. Tectonic shifts have happened (and are still happening), with projects now taking only weeks or months, even for businesses that are learning to scale up for the first time.
This infrastructure stress will further accelerate technological shifts as well, whether from SAS to NVMe at the endpoints or from DAS- or SAN-based solutions to NVMe over Fabrics (NVMe-oF) based solutions that deliver greater agility to meet both dynamic and unforeseen demands of the future.
Organizations are scrambling to update their infrastructure, and many are battling inefficient data silos and large operational expenses. How can data centers take full advantage of modern NVMe SSDs?
NVMe SSDs are playing a pivotal role in making the new reality possible for people and businesses around the world. As users transition from SAS and SATA, NVMe is not only increasing overall system performance and utilization, it’s creating next-generation flexible and agile IT infrastructure as well. Capitalizing on the power of NVMe, SSDs now enable data centers to run more services on their hardware, i.e., improved utilization. This is an important consideration for IT leaders and organizations looking to improve efficiencies.
NVMe SSDs are helping both public and private cloud infrastructures in various areas such as the highest performance storage, the lowest latency interface and the flexibility to support needs from boot to high-performance compute as well as infrastructure productivity. NVMe supports enterprise specifications for server and storage systems such as namespaces, virtualization support, scatter gather list, reservations, fused operations, and emerging technologies such as Zoned Namespaces (ZNS).
Additionally, NVMe-oF extends the benefits of NVMe technology and enables sharing data between hosts and NVMe-based platforms over a fabric. The ratification of the NVMe 1.4 and NVMe-oF 1.1 specifications, with the addition of ZNS, have further strengthened NVMe’s position in enterprise data centers. Therefore, by introducing NVMe SSDs into their infrastructure, organizations will have the tools to get more from their data assets.
What kind of demand for faster hardware do you expect in the next five years?
Now and into the future, data centers of all shapes and sizes are constantly striving to achieve greater scalability, efficiencies and increased productivity and responsiveness with the best TCO. Business leaders and IT decision-makers must understand and navigate through the complexities of cloud, edge and hybrid on-prem data center technologies and architectures, which are increasingly being relied upon to support a growing and complex ecosystem of workloads, applications and AI/IoT datasets.
More than a decade ago, IT systems relied on software running on dedicated general-purpose systems for each application. This created many inefficiencies and scaling challenges, especially in large-scale system designs. Today, data dependence has been growing consistently and exponentially, which has forced data center architects to decouple applications from the systems. This was the birth of the HCI market and, more recently, the composable disaggregated infrastructure market.
Next-generation infrastructures are moving to disaggregated, pooled resources (e.g., compute, accelerators and storage) that can be dynamically composed to meet the ever increasing and somewhat unpredictable demands of the future. All of this allows us to make efficient use of hardware to increase infrastructure agility, scalability and software control, remove various system bottlenecks and improve overall TCO.
Domain-based Message Authentication, Reporting & Conformance (DMARC) is an email authentication, policy, and reporting protocol. It builds on the SPF and DKIM protocols to improve and monitor protection of a domain against fraudulent email.
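Concretely, a domain owner publishes their DMARC policy as a DNS TXT record at `_dmarc.<domain>`. The sketch below parses one such record into its tag/value pairs; the domain and record contents are illustrative, not taken from any real deployment.

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags

# Hypothetical record published at _dmarc.example.com
record = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # → quarantine: what receivers do with mail failing SPF/DKIM
```

The `p` tag is the policy receivers apply to failing mail, and `rua` is where aggregate reports are sent back to the domain owner, which is what the report-interpretation tools discussed below consume.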
To select a suitable DMARC solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.
Scott Croskey, Global CISO, Cipher
DMARC solutions add security to business email systems by ensuring DKIM and SPF standards are in place to mitigate risks from fraudulent use. They evaluate every inbound and outbound email for these security standards and can integrate with Secure Email Gateway solutions to block malicious activity.
When evaluating DMARC solutions, you should focus on vendors that employ the following features:
- Cloud-based (SaaS) deployment. This eases the burden on company IT teams, allowing for the solution to be easily deployed and configured with out-of-the-box security policies.
- Domain diagnosis. This will ensure your business is aware of any domain vulnerabilities, many of which can be common for SMBs to overlook and consequently increase their risk.
- User friendly dashboard. This will ensure your team does not need a lot of time to understand how the solution works.
For larger companies, you should also consider vendors that employ:
- Forensic reporting. This allows for detailed information on why emails may have failed DMARC checks and allow for additional system tuning.
- DNS record change tracking. This allows for additional insight into malicious activity.
- API integration. Large companies typically have internal dashboards and workflows. API Integration with the DMARC solution will allow you to tailor the solution into your enterprise reporting & analysis tools.
Len Shneyder, VP of Industry Relations, Twilio
A company that wants to achieve DMARC enforcement should consider a crawl, walk, run approach, as DMARC doesn’t work unless you have published SPF and DKIM. DMARC essentially communicates a policy and a set of prescriptive actions to a receiving domain on what to do if an email fails an SPF or DKIM check.
If a company has the technical aptitude to publish SPF and DKIM then it stands to reason they can publish one more policy. However, when a sophisticated enterprise begins working with third parties that want to send emails on behalf of that company, in the form of an email service provider for marketing communications, a ticketing system, an internal HR tool, or all of the above and more, then the DMARC policy becomes much more complicated and a company might consider turning to one of a small field of companies that have automated the process of reaching enforcement.
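The staged approach Shneyder describes can be expressed as a sequence of DMARC records with progressively stricter policies: monitor only, quarantine a sample, then reject. The domain and report address below are hypothetical placeholders.

```python
# Each stage is a DMARC TXT record a domain owner might publish in turn.
STAGES = [
    ("crawl", "v=DMARC1; p=none; rua=mailto:reports@example.com"),               # monitor only
    ("walk",  "v=DMARC1; p=quarantine; pct=25; rua=mailto:reports@example.com"), # quarantine 25% of failures
    ("run",   "v=DMARC1; p=reject; rua=mailto:reports@example.com"),             # full enforcement
]

for name, record in STAGES:
    print(f"{name}: {record}")
```

The `pct` tag is what makes the middle stage gradual: it applies the stricter policy to only a fraction of failing mail while the aggregate reports reveal any legitimate senders that would be caught.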
The question of which provider to choose really rests on the complexity and breadth of your company. Different providers will be suited to different-sized companies; however, if you haven’t reached that scale yet, then there’s no reason why you couldn’t do it yourself.
Chuck Swenberg, SVP Strategy, Red Sift
It used to be that interpreting DMARC reports, which provide a view of the mail authentication results for every IP used to send mail on behalf of your domain, was sufficient. However, traditional stand-alone DMARC tools linked with professional services are increasingly no longer cost-effective or responsive enough to organizations’ needs. The continuing rise in email threat volumes and the increased diversification of app/cloud services sending email require strong diligence in selecting a solution. DMARC should also no longer be viewed as a one-time configuration project. When evaluating solutions, consider:
- Accuracy: How complete is the classification of IPs from mail sender reports, and the subsequent categorization of the mail that belongs to my organization?
- Insight: Is there a clear, defined workflow process in the solution? The best solutions will have easy to use, staged flows that display recommended actions and contextual guides from the data presented to explain misconfigurations in email authentication. Data needs to be actionable with insight.
- Automation: How long will it take my organization to implement DMARC? How can I effectively maintain a DMARC enforcement policy on an ongoing basis? More recent platform solutions for DMARC use hosted management for SPF authentication, which allows expansion past the 10-SPF-lookup limit and provides far more reliable and resilient email delivery. Ongoing automated monitoring with alerting that recognizes changes in authentication, identifies new sources and takes immediate action should be a requirement.
- Value: How much should I budget and how can total cost and time resources be efficiently managed? Look for automation of defined actions and applying expertise to specifically implement those actions in the best manner for the organization. This will help limit the dependency on external professional services and result in significantly lower costs over time.
Automation is fundamental to selecting a solution that significantly lowers cost, reduces time to DMARC implementation and ensures a more reliable approach to the handling and delivery of your organization’s email.
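The 10-lookup limit mentioned above comes from the SPF specification (RFC 7208): only ten DNS-querying mechanisms (`include`, `a`, `mx`, `ptr`, `exists`, `redirect`) may be evaluated per check. A minimal sketch of counting them in a single record follows; real checkers also recurse into each `include`, and the record shown is hypothetical.

```python
def count_spf_lookups(record: str) -> int:
    """Count DNS-querying mechanisms/modifiers in one SPF record string."""
    lookup_mechanisms = {"include", "a", "mx", "ptr", "exists", "redirect"}
    count = 0
    for token in record.split():
        token = token.lstrip("+-~?")  # strip SPF qualifiers
        # mechanism name ends at ':', '=' or '/' (e.g. "include:domain", "redirect=domain")
        name = token.split(":", 1)[0].split("=", 1)[0].split("/", 1)[0]
        if name in lookup_mechanisms:
            count += 1
    return count

# Hypothetical record: two includes + mx + a = 4 of the 10 allowed lookups
record = "v=spf1 include:_spf.example.com include:mailer.example.net mx a -all"
print(count_spf_lookups(record), "of 10 lookups used")  # → 4 of 10 lookups used
```

Hosted SPF management sidesteps this cap by flattening the nested includes server-side, which is why the vendors above advertise it.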
Anna Ward, Head of Deliverability, Postmark
A good DMARC solution should clearly identify high-risk sources, forwarders, and common email providers. It should provide actionable next steps for mitigating risk and minimize details until you actually need them. Avoid solutions that don’t show all authenticated domains or that fail to differentiate between merely passing SPF/DKIM and achieving alignment.
Remember that adding a DMARC solution is essentially just adding a reporting address to your policy, so try on a few (or several at a time) if you’re curious about any provider.
How hands-on do you want to be? Will you regularly access the data via API, the app/website, email digests, etc? For sharing the data with multiple people/teams, look for secure multi-user management. Want a human guiding your progress, or do you prefer the ability to self-serve? Finally consider whether you’d point your DNS records to your DMARC provider, as some will include/exclude sending sources for you.
- If you have many low-sending domains, look for tiered pricing by volume. Some are even free below a certain volume.
- If you have a higher-volume domain, look for pricing per monitored domain. This also limits price fluctuations, especially if there’s a surge in unauthorized mail.
- With both pricing options, check whether they include monitoring for subdomains inheriting the DMARC policy from the main domain.
Time and again (and again), survey results tell us that many cybersecurity professionals are close to burnout and are considering quitting their jobs or even leaving the cybersecurity industry entirely.
The reasons for this dire situation vary depending on their role and position within the organization. For example, a recent Ponemon report has revealed that security operations center (SOC) team members are stressed by many things: from increasing workloads, lack of visibility into the network and IT infrastructure and being on call 24/7/365, to information and alert overload, inability to recruit and retain expert personnel, and lack of resources.
When asked what steps could be taken to alleviate their SOC team’s pain, respondents’ answers were also wide-ranging (multiple responses were permitted).
In a lively discussion that followed the publication of the report, Joshua Marpet, Chief Operating Officer of Red Lion and long-time tech and security professional, noted that there are also other things getting SOC team members down.
“SOC has little career path, very little respect inside or outside the industry, massive responsibilities, not the best pay, and almost no authority to do anything about what they find,” he pointed out.
The problem(s) with the SOC analyst role
“In olden days, being a SOC analyst was a respected gig. Entry-level SOC analyst was how you broke into the industry, learned about alarms, alerts, and notifications, and earned your chops in incident response, root cause analysis, report writing/documentation, and potentially, if you were awesome, in presenting it to the boss(es). Then you were either put on the incident response team, or moved over to digital forensics, or you could maybe switch a bit to DevOps/SecDevOps if that caught your interest. Even pentesting, if you got really good at blue teaming, which is a pretty good pathway into breaking and red teaming,” Marpet explained to Help Net Security.
“Now, in many companies, SOC analyst is a dead-end job. With the extreme specialization and commoditization of SOC analyst jobs, anything interesting is taken away almost immediately: ‘Oh! This looks bad, send it to Incident Response!’ or ‘I’m not sure what this is, send it to Security!’ SOC analysts became security dispatchers a while ago.”
K.C. Yerrid, an IT security professional who’s no stranger to burnout, also says that it’s difficult to grow from a SOC analyst role in an organization.
“There are six documented causes of burnout: workload, perceived lack of control, insufficient reward, strength of community, fairness, and a values mismatch. Any or all of these can exist and do exist at the SOC Analysts level,” he noted.
“Alert fatigue (workload) is a real phenomenon, and the rate at which alerts can come in could lead to a perceived lack of control in the outcome of one’s responses. We all know that SOC analyst jobs lack sufficient reward, and company culture dictates the strength of community. Finally, as mentioned, it’s an uphill climb to be promoted out of a SOC analyst role. The value mismatch can come from the manager or organizational level.”
A SOC is still a great place to learn all of the above things, but it is generally no longer a career path starter, Marpet notes.
“If it’s a job you can get, take it – for a year,” he counseled. “Unless you find a great place. I’ve heard that Dave Kennedy’s Binary Defense is a fantastic place. Lots of good places still exist. You just have to find them.”
To SOC analysts who are overworked and close to burning out, he advises thinking hard about the next step.
“If you’re understaffed and overworked due to COVID-19, and it should let up in a month or two, that’s ok. But if your manager is not taking care of you, informing you of what’s happening, if your company has shown no sign of fixing the issue, or set timelines to fix it, why are you there? Go network and find another job. If you have problems doing that, go to CyberSecJobs.com, and check out their listings. If you’re scared of change, hit me up – I do career guidance all the time.”
For those who decide to stay where they are, there’s always the option to try and minimize or remove the stressors that can lead to burnout.
Advice for entering and staying in infosec
To those just entering the information security industry, Marpet advises figuring out who’s the go-to person(s) for the field they want to specialize in – say, digital forensics or pentesting – then finding out when and where they’re speaking.
“Go there, say hello. Don’t gush, don’t beg, don’t cry – just say ‘Hi! Nice to meet you!’ About the fourth time you do this, you’ll see them answering a question you have an opinion on. Mention it. If it’s a good point, you’ll make them think. Then they recognize you from the times you said hi. They know you have a brain. And they know they want to know you.”
Those who still don’t know what they want to concentrate on should go to a conference (when and where possible), meet people, find a village with interesting stuff going on, ask questions, watch and learn.
“Networking is your friend. Meet people. Set up your LinkedIn. People will change their email address, but not their LinkedIn, or MeWe, or whatever is your social network of choice. Say hello and interact with them.”
For staying and thriving in the infosec industry, his best recommendation is to always keep learning: set up a home lab, a development environment, or anything else that will keep you learning everyday.
“Do you know how awesome it is as an interviewer to hear the interviewee get excited about their home lab or new open source tool they just put a commit into or a firewall vuln they figured out? That gets you hired anywhere and everywhere,” he stressed.
Looming infosec industry challenges
Coincidentally, continuous knowledge acquisition is also a way to counteract one of the key challenges the information security industry will have to deal with over the next five years: the rising tide of ineptitude.
Colleges are churning out supposedly qualified graduates, he says, but many of them are actually not. Infosec has become an overhyped profession, a “sexy” option for those who want to be “cool”. But infosec is a mindset as well as a job, he points out. Most importantly, at the end of the day, you have to be able to do the job.
Other imminent infosec industry challenges? Data security and artificial intelligence that isn’t intelligent.
“Becoming a data-centric business is vital, but most companies have no idea where their data is, what data they own matters, who has rights to that data, and frankly, what security is wrapped around that data,” he noted.
“AI/ML is awesome, fun, and amazing, but if you ask the wrong questions, or don’t ask questions that are broad enough, or targeted enough, you get garbage output. AI does not think for itself (yet), so it can’t tell you how bad an idea your question is – so you have to be careful.”
Leszek Miś is the founder of Defensive Security, a principal trainer and security researcher with over 15 years of experience. Next week, he’s running an amazing online training course – In & Out – Network Exfiltration and Post-Exploitation Techniques [RED Edition] at HITBSecConf 2020 Singapore, so it was the perfect time for an interview.
What are the main characteristics of modern adversary behavior? What should enterprise security teams be on the lookout for?
This is a very open question as it depends on the attacker’s skillset and offensive experience. Modern adversaries like to behave in various ways. Don’t forget it’s also closely related to what the target is, and the attacker’s budget.
From what we are seeing in the wild, in most cases an adversary uses a combination of publicly available tools like RATs and offensive C2 frameworks powered up by a large number of post-exploitation and lateral movement modules, along with advanced and well-known tactics, techniques and procedures. The goal is to get initial access to the network, pivot over systems, networks or even OS processes, escalate privileges if needed, find the interesting data assets, copy and hide them (sometimes in very unusual network locations), and eventually persist and exfiltrate the data using a different set of communication channels.
Advanced attackers like to blend into the network traffic of the target to become even more stealthy. Adversaries also like to make major modifications to open source tools to make detection harder. CVEs in the form of 0-day or 1-day exploits are often in use.
Big network environments are very hard to maintain and even understand – and attackers are very good at exploiting that. Proven protection and detection are hard to achieve too. A single parameter or argument visible in the process list can make a significant difference.
That’s the reason why companies should constantly test their environments against TTPs. The baseline profiling of your core network components, OS, devices and apps, adversary simulations, achieving full visibility and analytics across many different network data sources, correlation, and understanding of how each component affects the other one seems like a good approach for dealing with cybersecurity risks.
It’s not about if, it’s about when you will become a target. You need to be prepared. That’s why at least an understanding of publicly available offensive tools and techniques is crucial in the fight against attackers. We have to train and learn new stuff every single day, as attackers do. We have to test our assumptions in the field of purple teaming, where two teams – the red one and the blue one – work together, simulating real threats and doing detection research at the same time. Without threat hunting, you are blind.
Based on what the market is saying, having a dedicated defensive/offensive training environment ready to use out-of-the-box is a good path that allows us to be prepared. We cannot, however, do much without:
- Understanding what the real threat is
- Solid technological base
- Skilled teams and risk-aware management
- Being up to date
- Dedicated budget for training
- Research time
- Desire to learn.
Based on your experience, what are the most significant misconceptions when it comes to network exfiltration? What are training attendees mostly surprised about?
The most significant misconception when it comes to network exfiltration is incorrectly believing that something is impossible without checking: “This box does not have direct internet access so you can’t steal data from it.” Really? That’s the power of pivoting and the lateral movement phase. During an adversary simulation, it’s always the case.
Show me or let me simulate your scenario and I’ll understand. Training attendees are surprised mostly by two things. The first is the ease of performing certain elements of the attack and the number of possibilities. The second is related to chained attack scenarios. Whenever you are skilled enough to combine or chain together different techniques, tools, or “exotic” communication channels – you are the winner. You have to spend lots of hours playing to understand and make progress.
“Feeling the network” is very important. I also found very surprising the number of possibilities for using valid, normal network channels like cloud-based services for exfiltration or C2. SSH over a Google service? Data exfiltration over Dropbox? C2 over a Slack channel? Is it really possible and so easy at the same time?
What’s your take on using open source tools within an enterprise security architecture?
I have two points of view, they are related to the offensive and defensive side and both are positive. In short, I believe they should be a part of every company’s cybersecurity strategy.
From the offensive perspective, it’s amazing how many free open source tools help with the execution of adversary simulations, penetration testing services or just doing research. Open source delivers flexibility – and I am sure most of the red teamers use or create open source projects while working for large companies. It’s a great value for everyone. Recently, blue teams have started doing the same and we’re seeing some powerful knowledge out there.
From a defensive point of view, OSS is in use almost everywhere: even if a huge part of the enterprise infrastructure is based on commercial products, you will find open source components in it. Many commercial products would not be possible without OSS.
I am a big supporter of having critical, security areas covered by OSS. Just to name a few: Zeek IDS, Suricata IDS, Moloch, OSquery and Kolide Fleet, ModSecurity as WAF, Volatility Framework for memory analysis, auditd, iptables, LKRG for Linux kernel hardening, Graylog, Wazuh / OSSEC, (H)ELK, eBPF, theHive, MISP, Sigma rules – it is impossible to list all of them here. These are all very stable projects that can be used as supporting technology or for creating your own SOC environment from scratch. Big kudos to the open source community!
What advice would you give to those just entering the cybersecurity industry who want to work in security operations? What skills should they develop?
Based on my experience I would say that learning the basics is key; without a solid foundation you’ll never understand how things work. I would suggest learning how networks work and how Linux internals work. You should patch and compile your own Linux kernel, and play with system rootkits, trying to detect them from the defensive side.
The same small step approach applies to a Windows infrastructure: AD internals, LDAP, Kerberos, GPO, DNS, etc. – all of them matter. At the same time, you could learn virtualization techniques and start doing your first programming steps to eventually get into exploitation or reversing. Making your own research lab or using ready-to-use platforms like PurpleLabs should give you a nice acceleration.
The short and simple answer does not exist, but stubbornness, discernment, enthusiasm, an open mind, hard work, and thousands of hours spent at the computer learning new stuff will eventually allow you to choose the right path in the cybersecurity world.
Network detection and response (NDR) solutions enable organizations to improve their threat response, help protect against a variety of threats, and provide visibility into what is actually on the network.
To select an appropriate network detection and response solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.
Mike Hamilton, CISO, CI Security
Network detection and response uses a spectrum of technology and humans, and the right mix for your organization is highly individual. Here are three different mixes to consider:
Managed – Managed detection and response combines technology to collect information from your network, detection analytics to identify aberrational activity, and analysts to investigate, confirm, and conduct response operations along pre-defined playbooks – as a service.
Operated – In the middle, you’ll own the technology, the people to operate the technology, and the processes for response, recovery, and recordkeeping. This is how many organizations have evolved but are discovering that this is harder to sustain.
Automated – At the technology end of the spectrum is automation: SOAR and other methodologies leverage your preventive and detective controls and integrate them to take an action decided by technology.
To decide whether you will be best served by Managed, Operated, or Automated, ask:
- How fast/easy is deployment?
- Does the solution ingest and analyze all your data sources?
- For Operated – What are the resource costs, including how using resources for security may affect current projects as opportunity cost?
- For Managed – How does the provider source and retain threat hunters and Analysts?
- For Automated – What is the worst-case scenario for a false positive?
Rahul Kashyap, CEO, Awake Security
NDR solutions can protect against non-malware threats, including insider attacks, credential abuse, lateral movement, and data exfiltration. They give organizations greater visibility into what is actually on the network as well as the activity occurring on it. But not all NDR solutions are equal. To maximize value, buyers should consider three key parameters:
- Data: Look for solutions that parse the whole packet rather than just NetFlow or IDS alerts. This provides far more depth of visibility, allowing the solution to identify more relevant threats.
- Machine Learning and AI: Avoid solutions that rely primarily on unsupervised machine learning and act as black boxes. These types of offerings generate significant operational overhead via false positives and negatives, and provide no explanation to the analyst on why something was flagged as an issue.
- Use cases: Reduce tool sprawl by replacing existing solutions for network forensics, threat hunting etc. This helps consolidate and modernize your security operations, making the team more efficient.
Like any other security solution, simply acquiring a new NDR tool does not improve security. In my experience, it is critical for buyers to think through operational impacts when deciding on a technology stack.
Igor Mezic, CTO, MixMode
There are some key questions on the underlying methodologies that should be asked when selecting an NDR solution:
Is the AI NDR system partially or entirely dependent on rules? If so, what is the overhead related to tuning and maintaining the rule set? Attack vectors are changing rapidly in a modern security environment, outpacing rule development efforts by a large margin. Rule-based information can be useful as a context, but not as a primary source of information. The core of the machine learning system should be adaptable to new network conditions and thus independent of static rules.
What is the false positive rate for the detections? What is the false negative rate? The response part of NDR is highly dependent on the quality of detection. Shutting down a subnet over a false positive can disrupt normal network operation. False positives and negatives abound in rule-based systems and in systems that use a supervised learning methodology based on labeling. Unsupervised systems based on clustering and Bayesian methods also typically feature high rates of false positives.
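Mezic's questions about false positive and false negative rates reduce to standard confusion-matrix arithmetic, which is worth asking vendors to show over a realistic test window. The counts below are invented purely for illustration.

```python
# Hypothetical detection counts over a test window:
# tp = real attacks flagged, fp = benign traffic flagged,
# tn = benign traffic passed, fn = real attacks missed.
tp, fp, tn, fn = 90, 40, 860, 10

fpr = fp / (fp + tn)  # fraction of benign traffic wrongly flagged
fnr = fn / (fn + tp)  # fraction of real attacks missed
print(f"FPR = {fpr:.1%}, FNR = {fnr:.1%}")  # → FPR = 4.4%, FNR = 10.0%
```

Even a seemingly low FPR is punishing at network scale: 4.4% of hundreds of thousands of benign flows per day is far more alerts than any SOC team can triage.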
What happens when we add a new subnet or a router to the network? Does the NDR system have to re-learn everything again? Learning in off-the-shelf machine learning systems can take 6-24 months. If that cycle repeats every time a new element is added to the network, the methodology is of limited use. The AI system must adapt seamlessly to new conditions on the network, with no additional extensive learning period.
How easy is it to spoof the detection system? It is well known that non-generative machine learning methodologies can be easily spoofed by the injection of corrupted data, rendering the system incapable of recognizing a specific attack.
Steve Miller, Principal Applied Security Researcher, FireEye
An NDR solution must enable action in a variety of forms.
Detection events must be sorted into distinct buckets of things to care about. The goal of event priority or criticality is to ensure that important, qualified network detection events are at the top of the to-do list. Your security team can take the detection events at the top and respond with more care and urgency with respect to the affected assets.
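The bucketing Miller describes amounts to ordering the analyst queue by a criticality score. A minimal sketch, in which the event fields, rule names and scores are all hypothetical:

```python
from operator import itemgetter

# Hypothetical detection events as a list of dicts.
events = [
    {"rule": "dns-tunneling-suspected", "criticality": 70, "asset": "db-01"},
    {"rule": "known-c2-beacon",         "criticality": 95, "asset": "hr-laptop-12"},
    {"rule": "port-scan-internal",      "criticality": 40, "asset": "dev-vm-3"},
]

# Highest-criticality events land at the top of the analyst's to-do list.
todo = sorted(events, key=itemgetter("criticality"), reverse=True)
for e in todo:
    print(e["criticality"], e["rule"], e["asset"])
```

In a real deployment the score would be computed from signal confidence, asset value and threat-intel context rather than assigned by hand, but the queue discipline is the same.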
There must be historical recording for network activity. This may be full packet capture stored for a time period, or merely packet capture in a “time wrinkle,” 5 minutes before and after each network detection event. Solutions should include abstracted network logging, such as Netflow and HTTP event logging. The more logging, the easier an investigation becomes.
Solutions must enable alert-to-action automations. When examining alerts, analysts make routine movements to gather information that aids in validation and response options. Solutions must enable automated data collection associated with alerts in preparation for analyst review, thus reducing manual actions.
Functionally, this means solutions must easily integrate and gather contextual data from other technologies such as: DHCP leases; passive DNS resolutions; threat actor or malware associations; and network/asset “handling” systems that may inoculate or reduce the impact of a malicious event through quarantining, blocking, or manipulation of packets. Automatic provision of contextual data and “handling” options is foundational to taking action, which is often the most laborious part of the human workflow.
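The alert-to-action automation described above can be sketched as an enrichment pipeline: for each alert, gather context from neighboring systems before an analyst opens it. Every lookup function and field name here is a hypothetical stand-in for a real integration (a DHCP server query, a passive-DNS service, and so on), not an actual API.

```python
def lookup_dhcp_lease(ip):
    """Stand-in for querying the DHCP server for the lease behind an IP."""
    return {"hostname": "ws-042", "mac": "00:11:22:33:44:55"}

def lookup_passive_dns(domain):
    """Stand-in for querying a passive-DNS service for historical resolutions."""
    return ["203.0.113.7"]

def enrich(alert):
    """Attach contextual data to an alert before it reaches an analyst."""
    alert["context"] = {
        "dhcp": lookup_dhcp_lease(alert["src_ip"]),
        "pdns": lookup_passive_dns(alert["dest_domain"]),
    }
    return alert

alert = enrich({"src_ip": "198.51.100.9", "dest_domain": "bad.example"})
print(alert["context"]["dhcp"]["hostname"])  # → ws-042
```

The point of the pattern is that the routine data-gathering happens once, automatically, instead of being repeated by hand for every alert in the queue.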
Jyothish Varma, Director of Product Management, Nuspire
As organizations look to invest in an MDR, they should consider investing in a solution that has the capability to detect attacks geared to bypass existing security controls. For those solutions with static detection mechanisms, if the exploits used by a hacker don’t trigger a pre-existing rule, no one will know an attack is happening.
For this reason, companies must rely on a solution that augments existing security controls with advanced threat detection and response solutions and dedicated security analysts who are trained to proactively uncover evidence of threats.
Organizations should also consider a solution that detects attacks in real time with experts working around the clock to investigate and respond to alerts technology might have missed. A service that can provide a 24/7/365 security operations center staffed with security analysts ensures you will have full access to experts that can detect attacks as they happen and coordinate incident response plans as necessary. By working with providers that have 24/7 security operations centers, existing security teams will be much more productive and reduce time wasted responding to false positives.
The right MDR solution will not only help you remain secure from cyber threats, but will include these key features and outcomes that will benefit your organization.
Software-related issues continue to plague organizations of all sizes, so IT leaders are turning to application security testing tools for help. Since there are many types of programs available on the market, choosing one is not a straightforward process.
To select the perfect application security testing solution for your business, you need to think about an array of details. We’ve talked to several industry professionals to get insight to help you get started.
Leon Juranic, CTO, DefenseCode
Choosing the right application security testing solution for your business can be a daunting task for any organization. On the surface, they all appear to function similarly and provide a list of vulnerabilities as part of the results.
Prospective users need to look beyond the superficial and closely examine a couple of important factors and capabilities of any application security testing solution. Clients should focus on true positive and false positive rates (low noise levels) to determine how usable a vendor’s product is in the real world.
Having to spend hours triaging results to determine whether they are real is an expensive overhead for any business. It undermines confidence in the results, unnecessarily increases the workload of development teams, and can ultimately lead to the rejection of an AST product.
Secondly, understanding if your workflow can be supported is essential, otherwise, a standalone security product will never be used effectively by development teams. The best approach would be to invest upfront and evaluate a shortlist of vendors to determine if they are a good fit for your business.
Ferruh Mavituna, CEO, Invicti Security
The most important thing is getting real value from your solution in a short time. The goal of application security testing is to get measurable security improvements, not just find issues.
There is no point spending money on a solution that will take months to deploy and get the first results. When selecting your application security solution, time to value in the real world should be your #1 consideration.
Every organization is different, so for web application security, the only approach that works for all sorts of environments is dynamic application security testing. DAST tools scan web applications and APIs for vulnerabilities regardless of programming languages, frameworks, libraries, and so on, which makes them much easier to deploy. They don’t require the application to be in an active development pipeline and you don’t need to install anything on the server.
To get value from your DAST product, you need results that directly lead to security improvements. This requires accuracy, so the scanner finds all the vulnerabilities that you really have, but also confidence in your results, so you don’t waste time on false alarms. You get a list of real, actionable vulnerabilities and you can start fixing them. Then you can see real value from your investment in days, not months.
James Rabon, Director of Product Management, Micro Focus
During the software development lifecycle, there are several approaches that should be followed in order to maintain the speed needed to keep up with today’s release cadence. These approaches, which are crucial for any application security testing tool, are testing early, often and fast.
SAST identifies the root causes of security issues and helps remediate the underlying security flaws. An effective SAST tool identifies and eliminates vulnerabilities in source, binary, or byte code; allows you to review scan results in real time, with access to recommendations and line-of-code navigation to find vulnerabilities faster; enables collaborative auditing; and is fully integrated with the popular integrated development environments.
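As a toy illustration of the SAST idea, a scanner that flags risky calls with line-of-code locations might look like the sketch below. Real tools build abstract syntax trees and data-flow models rather than matching patterns; the rules and sample code here are invented for illustration only.

```python
import re

# A toy SAST-style rule set: pattern -> finding description.
RULES = {
    r"\beval\s*\(": "Use of eval() - possible code injection",
    r"\bpickle\.loads\s*\(": "Deserialization of untrusted data",
}


def scan_source(source: str) -> list:
    """Return (line number, message) pairs - the 'line-of-code navigation' part."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings


sample = "data = eval(user_input)\nprint(data)\n"
print(scan_source(sample))  # → [(1, 'Use of eval() - possible code injection')]
```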
DAST simulates attacks on a running web application. By integrating DAST tools into development, quality assurance and production, it can offer a continuous, holistic view. A successful DAST tool quickly identifies risk in existing applications, automates dynamic application security testing of any technology from development through production, validates vulnerabilities in running applications, prioritizes the most critical issues for root-cause analysis, and streamlines the process of remediating vulnerabilities.
Successful tools should be flexible enough for modern deployment models, being available both on-premises and as a service.
Richard Rogerson, Managing Partner, Packetlabs
Application security testing solutions can be delivered in various ways including as a tool/technology or as a professional service. Automation alone is often not enough because it misses critical areas of applications including business logic, authorization, identity management and several others. This is why professional services are the most comprehensive approach.
- Qualifications: Successful consulting engagements have long relied on experience, but it’s difficult to assess experience before selecting a solution which is why certifications are often the best method to ensure a baseline level of knowledge or practical experience. Certifications to ask for include: GWAPT, GXPN, GPEN, OSWE, OSCE, OSCP.
- Methodology: Having a methodical approach to assessing applications is important as it plays heavily into the consistency and thoroughness of the assessment. There are several open-source and industry-standard testing methodologies including the OWASP Testing Methodology, NIST, PTES, ISSAF and OSSTMM. It is also important to review a checklist of all potential vulnerabilities that your application will be tested for and for this – transparency is key.
- Technology: Technology is important in reducing effort requirements and maximizing code coverage. Technologies include DAST, SAST and IAST. DAST, or dynamic application security testing, is the most common; it evaluates your applications while they’re running, over the HTTP protocol. SAST, or static application security testing, evaluates applications at the line-of-code level. IAST, or interactive application security testing, is an evolving technology that combines both approaches. The tools used must include both automated and manual testing capabilities to help the consultant evaluate vulnerabilities directly from the HTTP request or line of code.
- Reporting: The deliverable of an assessment is a report. When evaluating solutions, it is worthwhile to review sample reports and ensure they meet your requirements and offer sufficient information to understand the discovered findings, and more importantly how to fix them.
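To make the DAST idea above concrete, here is a deliberately simplified sketch of a reflected-XSS probe. A real DAST tool would send the payload over HTTP against a running deployment; here a plain function stands in for the web application, and the payload and handler are invented:

```python
# Marker payload: if the application echoes it back unescaped, the page is
# likely vulnerable to reflected cross-site scripting.
PAYLOAD = '<script>alert("dast-probe")</script>'


def vulnerable_search(query: str) -> str:
    # Echoes user input into HTML without escaping - the bug DAST should find.
    return f"<h1>Results for {query}</h1>"


def check_reflected_xss(handler) -> bool:
    """Inject the payload and check whether it comes back unescaped."""
    response = handler(PAYLOAD)
    return PAYLOAD in response


print(check_reflected_xss(vulnerable_search))  # → True
```

The key property, mirrored here, is that the check needs no knowledge of the application's language or source code, only its running behavior.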
Dr. Thomas P. Scanlon, Data Science Technical Manager, CERT Division, Software Engineering Institute, Carnegie Mellon University
There is no universal, best tool for application security testing (AST). The most appropriate tool for one business environment may not be as suitable for another. When selecting an AST solution for a business, four of the most pertinent factors are budget, technology stack, source code availability, and use of open-source components.
- Budget – There are many quality open-source AST tools available for little or no cost. Commercial tools typically have more features and capabilities, so they are worth the investment if they fit the budget. A wise approach is to use an open-source tool first to gain domain experience, then shop and compare commercial tools.
- Technology stack – Large commercial AST tools support multiple programming languages, which may save costs when a business uses many technologies. Some smaller AST tools support only one or two languages but provide much deeper coverage, often best if you only need to support those languages.
- Source code availability – If the applications are developed in-house or the developer provides application source code, testing should use static code analysis tools. Without source code, testing should use dynamic analysis tools.
- Use of open-source components – If the application was developed with many third-party, open-source components, a software composition analysis (SCA) tool is a must. SCA tools detect the versions of all such components in use and list all their known vulnerabilities and, often, mitigations.
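The SCA approach described above can be sketched as a simple lookup of declared components against a known-vulnerabilities database. Component names, versions and CVE identifiers below are invented for illustration:

```python
# A toy known-vulnerabilities database: (component, version) -> CVE list.
KNOWN_VULNS = {
    ("examplelib", "1.2.0"): ["CVE-2020-0001"],
    ("parserkit", "3.4.1"): ["CVE-2019-1111", "CVE-2020-2222"],
}


def sca_report(components: dict) -> dict:
    """Return {component: [CVE, ...]} for every vulnerable version in use."""
    return {
        name: KNOWN_VULNS[(name, version)]
        for name, version in components.items()
        if (name, version) in KNOWN_VULNS
    }


app_components = {"examplelib": "1.2.0", "parserkit": "3.5.0"}
print(sca_report(app_components))  # → {'examplelib': ['CVE-2020-0001']}
```

Real SCA tools detect component versions automatically (from manifests, lock files or binaries) and pull vulnerability data from continuously updated feeds rather than a static table.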
Susan St. Clair, Senior Cybersecurity Strategist, Checkmarx
Applications are what drive the vast majority of organizations today, so keeping them secure really means keeping your broader business and customers secure. However, before diving head-first into adopting a new AST solution, it’s important to look at what you already have in place.
Do you have established AppSec security policies or a standard that you’d like to adopt? Do you have an established CI/CD process? Are you already using SAST and looking to add more advanced tools like IAST and SCA into the mix? How closely do your AppSec, DevOps, and development teams work together? What are your developers hoping to get out of an AST tool? How about your AppSec team? Having a solid understanding of where you stand in your AST journey is just as important as the solution(s) you use.
At a minimum, ensure that the tools you choose:
- Work with DevOps to automatically trigger security scans and reduce remediation cycles
- Seamlessly integrate into your DevSecOps and CI/CD pipelines
- Are compatible with the framework and databases you’re already working with
- Offer a one-stop shop model so you can get SAST, IAST, SCA, etc. all in one place without needing to mix-and-match across vendors, ultimately reducing TCO
Making AST a priority can set your organization apart, not only in your ability to build better, more secure applications and code, but also by letting your customers know that you place the utmost importance on delivering an end product they can feel confident in using.
SANS Technology Institute’s Internet Storm Center (ISC) has been a valuable warning service and source of critical cyber threat information to internet users, organizations and security practitioners for nearly two decades.
Dr. Johannes Ullrich, the man whose site (DShield.org) became the basis of a SANS project (Incident.org) that later became the Internet Storm Center, has been leading the effort from the start.
Old and new attack trends
“Initially, the Internet Storm Center mostly dealt with firewall logs. In the early days (2000-2008 or so), firewall logs helped us understand the spread of worms like Leaves, Nimda, Blaster, and others,” he told Help Net Security.
“But as soon as home computers started to either use built-in firewalls, or take advantage of home router/firewall combos that are very common today, we saw how things shifted. Instead of actively scanning for systems, attackers tricked users into running the code for them. This led to the never-ending wave of malicious websites and emails that still dominates.”
More recently, they witnessed the shift from data theft to data encryption by ransomware, as attackers discovered that the person willing to pay most for the data is the original owner.
In addition to this major trend, Dr. Ullrich says that it has become obvious over the years that old attacks and vulnerabilities never quite disappear.
“I think that the vast majority of attacks, even advanced attacks, only use a small handful of actual vulnerabilities, but it’s actually very difficult to obtain really good data to support or reject this thesis. There are a lot of studies that look at different pieces of the puzzle, but it’s hard to find out how it all fits together.“
He also thinks that some very “noisy” attacks are very much overrated and that companies spend a lot of effort and money on defending against attacks that would never have been successful. One of the hard parts in defense is to accurately determine the actual risk posed by a particular attack.
Understanding risks and finding solutions
One thing that’s definitely not overrated? Application control.
“I think it’s one of the most important techniques that has finally made it to the mainstream. Having users execute arbitrary applications is probably one of the most common weaknesses. And yes, a lot of users hate the restrictions, but I find limiting the ‘zoo’ of allowed applications significantly reduces risks,” he explained.
“This is not a new idea. Microsoft has made this a standard optional feature in all currently supported versions of Windows and Apple has to some extent ‘mastered’ this with their mobile device app stores. But it is one of those simple and maybe a bit boring techniques that can always use more attention.”
In the end, though, some of the risks may be a bit overhyped, and it’s important to understand that there is no perfect security.
“In cybersecurity, just like in ‘real world’ security, it is important to understand risks. Just like a shop owner may have a discount table outside the store, well knowing that some of the items may be stolen, and a locked cabinet in the back with high value items, in cyber security we still have to learn how to accurately determine risk and how to spend the right amount of effort on the right problems. The goal isn’t to prevent every breach, but to limit the impact of a breach.”
Getting talented people into cybersecurity
In parallel with working on the Internet Storm Center, Dr. Ullrich became more involved in teaching SANS courses. He started out teaching the Intrusion Detection class – which is the class he still enjoys teaching the most – and added various other classes along the way.
He was also involved in SANS’s effort to establish a graduate school, and the work he has done with the Internet Storm Center has also become part of the graduate school’s research program that he’s heading up now as Dean of Research.
With all that in mind, I wondered what his take is on how to attract more young people into the cybersecurity field?
“There has been a lot of progress in the creation of gamified exercises to better identify talent and interest them in cybersecurity,” he noted.
“Cybersecurity is less about the knowledge of specific tools and techniques, but more about a talent to understand complex technological relationships and persevere in solving hard challenges.”
He also stressed that cybersecurity is a field that is always changing, and quickly.
“I think if you ‘sell’ cybersecurity as a field that offers you a set of challenging, never ending and changing puzzles, you likely address the right crowd. This is not a field where you learn once and ‘stick with it’ (does such a field still exist?). To excel, you also have to be a bit of a risk taker and you can’t always wait for instructions,” he concluded.
What key challenges will the cybersecurity industry be dealing with in the next five years?
Pete Herzog, Managing Director at ISECOM, is so sure that artificial intelligence could be the biggest security problem to solve and the biggest answer to the privacy problem that he cofounded a company, Urvin.ai, with an eclectic group of coders and scientists to explore this.
AI (and machine learning with it) is like a naive child that trusts what you tell it, and is therefore susceptible to fraud, abuse, and tricks, he says. However, it is also like that stubborn, no-bullshit friend who is always going to tell it to you straight.
“From a privacy perspective, AI that controls your personal identity data and medical records will be sure to only give that information to who you tell it to. It has no interest in gossiping with its neighbors about you, and has no greed, vanity, or confirmation bias. We should harness that for protecting our identities and improve how we share it,” he told Help Net Security.
“From a security perspective it has a lot to learn about trust. Or rather, we have a lot to learn on how to program it to trust. It’s the newest, shiniest version of garbage in / garbage out if we don’t learn from our mistakes. At ISECOM we are spending a lot of effort on how we can make security tests for AI and learning how it fits into the OSSTMM framework as a new channel alongside Data Networks, Wireless, Physical, Human, Telecommunications, and Applications.”
Setting up ISECOM
Herzog and his wife Marta Barceló founded the Institute for Security and Open Methodologies in 2001.
ISECOM is a non-profit, open source research organization that maintains the Open Source Security Testing Methodology Manual (OSSTMM), Hacker Highschool (a cybersecurity curriculum for teens in high school) and a security certification authority, all the while operating as a specialty security boutique for securing iconic places that can’t be secured with traditional security products.
Before that they were cybersecurity consultants, so the switch to business owners was a drastic one.
“We jumped full in, no money, and had to find customers from day one. And let me tell you, keeping the connoisseurs of FOSS as happy as the veterans of military-grade security is a balancing act that nobody will get right all of the time,” he said of the challenges they faced.
“With age I learned perspective and humility. And between that and carefully picking my fights I probably protected both the brand and my sanity in the long run.”
In the last decade or so, Herzog also worked in parallel as a security analyst, writer, advisor or CISO with some well and lesser known security companies.
Cybersecurity industry problems
With all these experiences to draw on, we wondered what’s his opinion on the cybersecurity industry as a whole.
He believes one of the problems is the extreme fragmentation of what makes security.
“This fragmentation of specific skills and specific technology creates a differentiation and demand for niche products that focus on one, specific thing. Yet you’re supposed to implement it all, which entails hiring all the people and buying all the products to do it all. Consultants, trainers, universities, and government organizations then follow the crowd on the ‘more is better’ security and this fractures the market more and more until it seems you can’t be secure unless you have the blue spiral thing to stop the blue spiral packets,” he explained.
“Basic security analysis has you making decisions on at least 16 different things for each connection allowed, and a typical organization has thousands of connections to the outside and hundreds of thousands inside. Add web and mobile apps to the mix and you push the number up exponentially. Therefore, even the basic stuff is complicated and to do it thoroughly is exhausting – which is why we buy products to help. But if they fracture the products into thousands of little pieces of technology and operations all with special names we need to continuously re-learn then we’re back to it being as bad as not having the products at all. And that’s what’s wrong with the cybersecurity industry at the moment: we really are confusing the hell out of people as to what they actually need to have and do to be secure. It’s so bad that you can’t buy a penetration test today and know what you’ll get. Imagine buying an oil change like that! It’s ridiculous, confusing, and hurts everyone.”
He doesn’t assign any blame on cybersecurity salespeople, though.
“They see the pain their customers go through and how badly they need security. From their perspective it’s like they see the breach already happening, just really slowly – and they don’t want to have to see another breach. Additionally, everyone working in cybersecurity knows that each breach gives more resources to an enemy and eventually it’s overwhelming for everyone, even the salespeople,” he noted.
He says that the cybersecurity industry has room for more innovation, but that the real problem is not a general lack of it, but the fact that attackers have at their disposal such a huge number of attack combinations that a product-based defense today is not enough. And cyber hygiene can only somewhat reduce the number of available attack types, but not enough to help the overburdened security staff secure everything.
Finally, he believes that people should not be a link in the security chain.
“People are our assets, not our security. The truth is that there is nothing that can’t be made more secure by removing the person from the process, so plan for them not being a link in your security chain and you’ll be more secure,” he concluded.
A Security Information and Event Management (SIEM) solution collects and analyzes activity from numerous resources across your IT infrastructure. A SIEM can provide information of critical importance, but how do you find one that fits your organization?
To select an appropriate SIEM solution for your business, you need to think about a variety of factors. We’ve talked to several industry professionals in order to get insight to help you get started.
Jae Lee, Senior Director, Elastic Security
SIEM is a mature product category and continues evolving. However, SIEM needs to enable teams to evolve, as SecOps transforms from “traditional” to “adaptive.”
Let’s start with people — traditional skillsets are based on tools (e.g., vulnerability, firewall, IDS/IPS, etc.), but broader skillsets are needed to help practitioners adapt quickly. Manipulating and analyzing data, performing collaborative research, understanding adversaries/tradecraft — SIEM must help augment and develop these skillsets.
Next is process — with improved skills, alerts no longer rule (unless allowed to), and pre-defined, static SOPs / playbooks alone are not enough. Teams now require real-time analysis to hunt — including performing research, reverse-engineering and simulating threats, and more. Context is everything. Hunting and operationalizing effectively requires full visibility — not in a separate tool, but within the SIEM.
Finally, technology. Full visibility isn’t just broad coverage, but fast insights. Also, detections need to work OOTB. Consider endpoint — there, OOTB detections have high accuracy. The same principle should apply in SIEM, without requiring every analyst to be an expert rule author. SIEM isn’t just “technology” — it needs real-world-validated security content.
As SecOps matures, major investments are often required for the care and feeding of a SIEM. You have to stop threats and justify your investment. Give yourself the runway to be confident that once deployed the SIEM can meet your fast-evolving needs, and ask hard questions around scale and flexibility — from detections to integrations, to deployment options, to pricing metrics.
Christopher Meenan, Director, QRadar Product Management and Strategy, IBM Cloud and Cognitive Software
The first thing to think about is what use cases you need to address. Your requirements will look very different depending on whether you need to secure your organization during a cloud transformation, build a unified IT and OT security operations program, or simply address compliance. Your use cases will drive requirements around integrations, use case content, analytics, and deployment methods.
Ask the vendors how they can help address your requirements. Understand which integrations and use case content are included, versus which require a separate license or custom development. Understand what analytics are available and how those analytics are used to detect known and unknown threats. Ask what frameworks, such as MITRE ATT&CK, are natively supported.
If you’re like most companies, your team is understaffed – which means you need usable products that help shorten the learning curve for new analysts and make your experienced team members more efficient. Ask how each solution measurably increases efficiency during the detection, investigation and response processes. Also ask about SaaS deployments and MSSP partnerships if you want to reduce ongoing management requirements.
Most importantly, don’t be shy. Ask for a proof of concept to make sure the tools you’re considering will work for you.
Stephen Moore, Chief Security Strategist, Exabeam
The most seasoned and well-resourced security teams can be easily overwhelmed by the volume of organizational alerts they receive in a day, and that complexity – coupled with the inherent difficulties of detecting credential-based attacks – means many SOC analysts now experience several pains that traditional SIEMs can’t solve, including alert fatigue, a lack of skilled analysts and lengthy investigation times.
Many organizations are now migrating their SIEM to the cloud, which allows analysts to harness greater compute power to sift through, interpret and operationalize SIEM data. Now more of their time is spent finding bad things versus platform and server support. But to choose the right SIEM for ‘the business’ you need to consult with it. You need to align its capabilities to the goals, concerns and expectations of the business – which will undoubtedly have changed over the last few months. Above all else, this requires taking the time to ask the questions.
Then, make choices based on known adversary behavior and breach outcomes – focusing specifically on credentials – ensuring your platform is adversary adaptable and object centered. Ask, will it improve your time to answer (TTA) questions, such as ‘which account or asset is associated with this alert?’ or ‘what happened before, during, and after?’
Finally, any solution needs to help your SOC analysts focus on the right things. Key to this is automation – both in the form of incident timelines that display the full scope, acting as the storyboard of the incident, as well as an automated incident response capability for when action must be taken to return the environment to normal. Providing automation of the necessary investigation steps is the most important thing an incident responder can have so they may take action faster and most importantly minimize the risk of an incomplete response.
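The incident-timeline idea can be illustrated with a minimal sketch: gather the related events and order them chronologically, so an analyst sees before, during and after at a glance. The event data below is invented:

```python
from datetime import datetime

# Related events as they might arrive from different sources, out of order.
events = [
    {"time": "2020-06-01T10:05:00", "what": "outbound C2 connection"},
    {"time": "2020-06-01T09:58:00", "what": "suspicious attachment opened"},
    {"time": "2020-06-01T10:20:00", "what": "lateral movement to file server"},
]

# The 'storyboard of the incident': everything in chronological order.
timeline = sorted(events, key=lambda e: datetime.fromisoformat(e["time"]))
for e in timeline:
    print(e["time"], "-", e["what"])
```

A real platform would also correlate the events to the incident automatically and attach response actions; the ordering step shown here is the core of the timeline view.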
Wade Woolwine, Principal Security Researcher, Rapid7
While the term SIEM has “security” as the very first word, event and log management isn’t just for security teams.
When organizations look to invest in a SIEM or replace an existing SIEM, they should consider use cases across security, IT/cloud, engineering, physical security, and any other group who may benefit from a centralized aggregation of logs. Once the stakeholders have been identified, documenting the specific logs, their sources, and any use cases will ensure the organization has a master list of needs against which to evaluate vendors.
Organizations should also recognize that the use cases will change over time and new use cases will be implemented against the SIEM, especially within the security team. For this reason, organizations should also consider the following as hard requirements to support future growth:
- Support for adding and categorizing custom event sources by your own team
- Support for cloud based event sources
- Field-level searching with advanced cross-data-type search functionality and regular expression support
- Saved searches with alerting
- Saved searches with dynamic dashboard reporting
- Ability to integrate threat feeds
- Support for automation platform integration
- API support
- Multi-day training included with purchase
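Several of the requirements above (saved searches, regular expression support, alerting) can be illustrated together with a minimal sketch. The search name, field and pattern are invented examples, not any product's syntax:

```python
import re

# A 'saved search with alerting': a named query over a field, re-run against
# incoming events, raising an alert on any match.
SAVED_SEARCH = {
    "name": "failed-admin-logons",
    "field": "message",
    "pattern": re.compile(r"authentication failure .* user=(admin|root)"),
}


def run_saved_search(search: dict, events: list) -> list:
    """Return one alert per event whose field matches the saved pattern."""
    alerts = []
    for event in events:
        value = event.get(search["field"], "")
        if search["pattern"].search(value):
            alerts.append({"search": search["name"], "event": event})
    return alerts


events = [
    {"message": "authentication failure from 198.51.100.9 user=admin"},
    {"message": "session opened for user alice"},
]
print(len(run_saved_search(SAVED_SEARCH, events)))  # → 1
```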
Jesper Zerlang, CEO, LogPoint
As the complexity of enterprise infrastructures is increasing, a key component of a Modern SIEM solution is the ability to capture data from everywhere. This includes data on-premises, in the cloud, and from software, including enterprise applications like SAP. In today’s complex threat landscape, a SIEM that fully integrates UEBA and allows enterprises to relevantly enhance security analytics instantly is an absolute necessity.
The efficiency of your SIEM solution is entirely dependent on the data you feed into it. If the license model of a SIEM solution relies on the volume of data ingested or the number of transactions, the cost will be ever-increasing due to the overall growth in data volumes. As a consequence, you may select to skip SIEM coverage for certain parts of your infrastructure to cut costs, and that can prove fatal.
Choose a SIEM with a license model that supports the full digitalization of your business and allows you to fully predict future cost. This will ensure that your business needs are aligned with your technology choices. And last but not least: Select a SIEM solution that has documented short time-to-value and complete your SIEM project on time. SIEM deployments, whether initial implementation or a replacement, are generally considered complicated and time-consuming. But they certainly don’t have to be.
Instituting an in-house cyber threat intelligence (CTI) program as part of the larger cybersecurity efforts can bring about many positive outcomes:
- The organization may naturally switch from a reactive cybersecurity posture to a predictive, proactive one.
- The security team may become more efficient and better prepared for detecting threats, preventing security incidents and data breaches, and reacting to active cyber intrusions.
- The exchange of pertinent threat intelligence with other organizations may improve collaboration and preparedness.
But these positive results are dependent on several things.
Some may think that, for example, cybersecurity is directly proportional to the amount of threat intelligence they collect.
In reality, though, threat intelligence can only serve an organization to the extent that its team is able to digest the information and rapidly operationalize and deploy countermeasures.
“You may collect information on an ongoing or future threat to your organization to include who the threat actor is, what are they going after, what is the tactic they will utilize to get in your network, how are they going to move laterally, how are they going to exfil information and when will the activity take place. You can collect all the relevant threat information but without the infrastructure in place to analyze the large amount of data coming in, the organization will not succeed in successfully orienting themselves and acting upon the threat information,” Santiago Holley, Global Threat Intelligence Lead at Thermo Fisher Scientific, told Help Net Security.
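Holley's point about operationalizing collected intelligence can be illustrated with a minimal sketch: indicators of compromise (IoCs) are only useful once they are matched against your own telemetry. All indicators and events below are invented:

```python
# A tiny indicator set, as it might arrive from a threat feed.
IOC_FEED = {
    "ips": {"203.0.113.50"},
    "domains": {"evil-updates.example"},
}


def match_iocs(events: list) -> list:
    """Return the subset of events that touch a known-bad IP or domain."""
    hits = []
    for event in events:
        if event.get("dst_ip") in IOC_FEED["ips"] or \
           event.get("domain") in IOC_FEED["domains"]:
            hits.append(event)
    return hits


telemetry = [
    {"dst_ip": "203.0.113.50", "host": "ws-14"},
    {"dst_ip": "192.0.2.10", "host": "ws-15"},
]
print(len(match_iocs(telemetry)))  # → 1
```

The hard part, as the quote stresses, is doing this at the scale of real log volumes and real feed sizes, which is what the supporting infrastructure is for.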
Working towards a threat intelligence program
Holley has worked in multiple threat intelligence and cyber positions over the past ten years, including a stint as a Threat Intelligence Lead with the FBI, and this allows him to offer some advice to security leaders that have been tasked with setting up a robust threat intelligence program for their organization.
One of the first steps towards establishing a threat intelligence program is to know your risk tolerance and set your priorities early, he says. While doing that, it’s important to keep in mind that it’s not possible to prevent every potential threat.
“Understand what data is most important to you and prioritize your limited resources and staff to make workloads manageable and keep your company safe,” he advised.
“Once you know your risk tolerance you need to understand your environment and perform a comprehensive inventory of internal and external assets to include threat feeds that you have access to. Generally, nobody knows your organization better than your own operators, so do not go on a shopping spree for tools/services without an inventory of what you do/don’t have.”
After all that’s out of the way, it’s time to automate security processes so that you can free your limited talented cybersecurity personnel and have them focus their efforts where they will be most effective.
“Always be on the lookout for passionate, qualified and knowledge-thirsty internal personnel that WANT to pivot to threat intelligence and develop them. Having someone that knows your organization, its culture, people and wants to grow goes a long way compared to the unknowns of bringing external talent,” he opined.
The importance of explaining risk
To those who are still fighting to get buy-in for a TI program from the organization’s executives and board members, he advises providing contextualized threat intelligence.
“You must put potential threats in terms that are meaningful to your audience such as how much risk a threat poses in terms of potential damage alongside which assets and data are at risk,” he explained.
“Many times business managers are focused on generating revenue and may see threat intelligence as an unnecessary expense. It is important for security leaders to communicate risk to their business managers and show how unaddressed risks translate into unnecessary costs and time delays.”
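One simple way to put threats in the business terms Holley describes is to score each one by estimated likelihood and potential damage to a named asset, then present the ranked list. A hedged sketch – the scoring scale and threat records are illustrative assumptions:

```python
def rank_threats(threats: list[dict]) -> list[dict]:
    """Rank threats by likelihood x impact so leadership sees dollars-at-risk first.

    Each record carries an estimated annual likelihood (0-1) and the business
    impact in dollars if the named asset is compromised.
    """
    for t in threats:
        t["expected_loss"] = round(t["likelihood"] * t["impact_usd"], 2)
    return sorted(threats, key=lambda t: t["expected_loss"], reverse=True)
```

Presenting "expected loss per asset" rather than raw indicator counts is what turns threat intelligence from an expense line into a decision aid for the board.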
He also advises security leaders to get to know the people they are working with and to start building professional working relationships. “The success of the program correlates to the strength of your team and how successful they are in collaborating and communicating with business managers.”
Cyber threat intelligence is one of the key tools security operations centers (SOCs) use to carry out their mission. While helpful, it’s also one of the many little things that add to the mounting pile of stress SOC teams often feel.
SOC analysts are tasked with keeping up with the organization’s security needs and getting end users to understand cybersecurity risks and change their behavior, but are often dealing with an overwhelming workload and constant emergencies and disruptions that take analysts away from their primary tasks.
Burnout is often lurking and ready to “grab” SOC team members, so Holley advises them to implement a number of techniques to manage stress:
- Identify the problem. Understand what is specifically causing your stress in the first place; root cause analysis is a good way to do this. Peel back the layers of the problem until you reach the root cause
- Control your time. Take control of your time by blocking your calendar, giving yourself time to focus on your own tasks, and avoiding being oversaturated with meetings
- Pick your battles. If you are going to go to war, make sure it is worth it. Avoid being dragged into confrontations that ultimately do not matter
- Stay healthy. Working out has many benefits when it comes to stress reduction, and it gives you the opportunity to focus on something for YOU.
“Today’s cyber security environment is challenging and requires analysts to react to changes quickly and effectively. There is a never-ending demand for flexible intellectual skills and the ability to analyze information and integrate different sources of knowledge to address challenges,” Holley noted.
His own preferred thinking process for making the most appropriate decisions as quickly as possible is the OODA loop (Observe, Orient, Decide, Act).
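The OODA loop is, at bottom, a feedback loop, and it can be expressed as one. A minimal sketch – the stage callables and the alert/state shapes are illustrative assumptions, not a prescribed SOC workflow:

```python
def ooda_step(observe, orient, decide, act, state):
    """Run one Observe-Orient-Decide-Act cycle and return the updated state."""
    raw = observe()                # Observe: gather new alerts/telemetry
    picture = orient(raw, state)   # Orient: place the data in context of what we know
    action = decide(picture)       # Decide: choose the highest-value response
    state = act(action, state)     # Act: execute, feeding the result back into state
    return state
```

The point of modeling it this way is the feedback: whatever `act` changes becomes part of the context the next `orient` call sees, which is what lets a team keep re-prioritizing as a situation evolves.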
“Risk management and being able to sort through large amounts of information and prioritize what needs to be actioned right away helps with problem solving. Keeping a cool head during difficult situations aids critical thinking but also allows for professional interactions with coworkers and stakeholders,” he concluded.
Michael Hamilton, CISO of CI Security, has worked in the information security industry for 30 years. As former CISO for the City of Seattle, he managed information security policy, strategy, and operations for 30 government agencies.
In this interview with Help Net Security, Michael discusses ransomware attacks and offers insight on how they will evolve in the near future.
What are some of the most interesting ransomware trends you’re seeing this year?
Double extortion: encryption combined with data exfiltration to incentivize paying the ransom, and monetizing the data in auctions if the victim doesn’t pay. Corporations can buy competitors’ info this way – it’s not all organized crime leveraging things stolen by organized crime.
The “intelligence” built into ransomware campaigns. They used to be smash and grab. Now they gain persistence, elevate privilege, and identify and disrupt data and computing that are critical to an organization’s continuity of operations.
The commoditization of ransomware as a service, and how that has played into current economic distress, allowing people to get into the crime business – mainly out of necessity.
Ransomware attacks increasingly target cities and municipalities. What kind of damage can they do exactly?
The services operated by local governments are critical on the scale that we live our daily lives. Water purification, waste treatment, storm water removal, traffic management, communication systems for law enforcement and public safety, emergency management, election systems, and 9-1-1 are all enabled by – and in some cases dependent on – IT. So the potential impacts of disrupting local government operations can be civil unrest, public health emergencies… all the way up to loss of life.
Think about it this way: your toilet can stop flushing and cause a public health emergency, traffic lights can all be turned to flashing red and impede first responders, drinking water can be rendered suspect (see the recent compromise of an Israeli water utility, where attackers reportedly gained control of chlorine injection), and county governments can be knocked over by ransomware as a service in the middle of an election.
What advice would you give to the CISO of a major city when it comes to protecting the IT infrastructure against ransomware attacks?
Because of the criticality of the services you provide, it is important to address the source of compromises that lead to ransomware: your users. Work to rescind the policy of de minimis use (using government technology for personal purposes), and institute a policy of all personal use on a personal device. Period. Measurements I made while CISO of the City of Seattle indicate that 40% of the compromised assets were due to the use of personal email.
Second, because credentials are under attack, use multi-factor authentication everywhere and nullify that vector.
Lastly, your preventive controls will fail, and you’re left with identifying a compromised asset through its behavior. You have something between 3 and 12 days (according to reports) to purge the compromise from the environment before it finds the “good stuff” to encrypt/exfiltrate. Your monitoring must be comprehensive, and someone must be assigned to follow up and investigate security alerts. Without proper people resourcing, your technology is yelling into the wind. Ask Target how this works out.
Ransomware as a Service continues to be available at different price points, making it very easy for inexperienced cybercriminals to get started quickly. What should be done to curb the surge of such services?
Insurance companies want the cheapest way out of a ransomware event, and that is frequently lower than the cost of full restoration and rebuild. This “market force” should be countered with strong disincentives for insurance companies to pay the extortion demand. This could be done through the rule-making process with state insurance commissions; however, that would mean 50 separate actions.
Credit-card transactions that pay for cybercrime as a service could be flagged, but this is easily circumvented with cryptocurrency.
The other way is to defend forward and go after the RaaS operations, and frankly I think the military should be involved. They shut down the Russian disinformation campaign at the end of the 2016 election, and Israel shot a missile into a building where Palestinian attackers were operating, so there’s a bit of precedent. We can’t use missiles, but government-run DDoS operations against dark market vendors might move that needle. However, without an extremely press-worthy disincentive, one that really gets the attention of the actors, this is going to continue.
How is ransomware going to evolve in the near future? What tactics are cybercriminals most likely going to implement?
Next stop is operational technologies, and specifically things like robotic manufacturing (we’ve already seen this). The double extortion and data auctions will become more prevalent, making it nearly impossible to avoid paying.
Apart from criminal activity, we are seeing our first legal action against a company that was hit with ransomware and had its customer data exfiltrated and samples made available: one very significant customer of this company is suing because their intellectual property was stolen and put up for sale. There’s going to be more of this. I think that rather than regulatory action, it will be litigation and the expectations of business partners for verifiable controls that move the needle.
The percentage of companies admitting to a mobile-related compromise has grown, even though a higher percentage of organizations now say they will not sacrifice the security of mobile devices to meet business targets.
To make things worse, the C-suite is the most likely group within an organization to ask for relaxed mobile security protocols – despite also being highly targeted by cyberattacks.
In order to select a suitable mobile security solution for your business, you need to consider many factors. We’ve talked to several industry professionals to get their insight on the topic.
Liviu Arsene, Global Cybersecurity Analyst, Bitdefender
A business mobile security solution needs to have a clear set of minimum abilities or features for securing devices and the information stored on them, and for enabling IT and security teams to remotely manage them easily.
For example, a mobile security solution for business needs to have excellent malware detection capabilities, as revealed by third-party independent testing organizations, with very few false positives, a high detection rate, and minimum performance impact on the device. It needs to allow IT and security teams to remotely manage the device by enabling policies such as device encryption, remote wipe, application whitelisting/blacklisting, and online content control.
These are key aspects for a business mobile security solution, as it both allows employees to stay safe from online and physical threats, and enables IT and security teams to better control, manage, and secure devices remotely in order to minimize any risk associated with a compromised device. The mobile security solution should also be platform agnostic, easily deployable on any mobile OS, centrally managed, and able to let users switch between profiles covering connectivity and encryption (VPN) settings based on the services the user needs.
Fennel Aurora, Security Adviser at F-Secure
Making any choice of this kind starts from asking the right questions. What is your company’s threat model? What are your IT and security management capabilities? What do you already know today about your existing IT, shadow IT, and employees’ bring-your-own devices?
If you are currently doing nothing and have little IT resources internally, you will not have the same requirements as a global corporation with whole departments handling this. As a farming supplies company, you will not face the same threats – and so will not have the same requirements – as an aeronautics company working on defense contracts.
In reality, even the biggest companies do not systematically do all three of the most basic steps. Firstly, you need to inventory your devices and IT, and be sure that the inventory is complete and up-to-date, as you can’t protect what you don’t know about. You also need, at minimum, to protect your employees’ devices against basic phishing attacks, which means using some kind of AV with browsing protection. You need to be able to deploy and update this easily via a central tool. A good mobile AV product will also protect your devices against ransomware and banking trojans via behavioral detection.
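The inventory step above boils down to reconciling the official asset register against what is actually observed on the network. A minimal sketch, with device identifiers and category names that are this example's own assumptions:

```python
def reconcile(register: set[str], observed: set[str]) -> dict[str, set[str]]:
    """Compare the official asset register with devices actually observed.

    'unknown' devices appear on the network but not in the register (shadow IT
    or BYOD); 'missing' devices are registered but never seen (stale records
    or lost hardware). Both categories need follow-up before you can claim
    the inventory is complete.
    """
    return {
        "unknown": observed - register,
        "missing": register - observed,
        "matched": register & observed,
    }
```

Running a reconciliation like this on a schedule is what keeps the inventory "complete and up-to-date" rather than a one-off spreadsheet.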
Finally, you need to help people use better passwords, which means helping them install and start using a password manager on all their devices. It also means helping them get started with multi-factor authentication.
Jon Clay, Director of Global Threat Communications, Trend Micro
Many businesses secure their PCs and servers from malicious code and cyber attacks, as they know these devices are predominately what malicious actors will target. However, we are increasingly seeing threat actors target mobile devices, whether to install ransomware for quick profit, or to steal sensitive data to sell in the underground markets. This means that organizations can no longer choose to forego including security on mobile devices – but there are a few challenges:
- Most mobile devices are owned by the employee
- Most of the data on the mobile device is likely to be personal to the owner
- There are many different device manufacturers and, as such, difficulties in maintaining support
- Employees access corporate data on their personal devices regularly
Here are a few key things that organizations should consider when looking to select a mobile security solution:
- Lost devices are one reason for lost data. Requiring users to encrypt their phones using a passcode or biometric option will help mitigate this risk.
- Malicious actors are looking for vulnerabilities in mobile devices to exploit, making regular update installs for OS and applications extremely important.
- Installing a security application can help with overall security of the device and protect against malicious attacks, including malicious apps that might already be installed on the device.
- Consider using some type of remote management to help monitor policy violations. Alerts can also help organizations track activities and attacks.
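The considerations above can be turned into an automated compliance check over the device records a management tool reports. A hedged sketch – the field names and minimum OS version are assumptions for illustration, not any MDM product's schema:

```python
MIN_OS = (13, 0)  # assumed minimum supported OS version

def check_device(device: dict) -> list[str]:
    """Return the list of policy violations for one device record."""
    violations = []
    if not device.get("encrypted"):
        violations.append("storage not encrypted")
    if not (device.get("passcode_set") or device.get("biometric_enabled")):
        violations.append("no passcode or biometric lock")
    if tuple(device.get("os_version", (0, 0))) < MIN_OS:
        violations.append("OS below minimum supported version")
    if not device.get("security_app_installed"):
        violations.append("security app missing")
    return violations
```

A remote-management tool that surfaces these violations as alerts covers the monitoring point above without requiring anyone to inspect devices by hand.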
Discuss these items with your prospective vendors to ensure they can provide coverage and protection for your employees’ devices. Check their research output to see if they understand and regularly identify new tactics and threats used by malicious actors in the mobile space. Ensure their offering can cover the tips listed above and ask whether they can help you with more than just mobile.
Jake Moore, Cybersecurity Specialist, ESET
Companies need to understand that their data is effectively insecure when their devices are not properly managed. Employees will tend to use their company-supplied devices in personal time and vice versa.
This unintentionally compromises private corporate data, due to activities like storing documents in insecure locations on their personal devices or online storage. Moreover, unmanaged functions like voice recognition also contribute to organizational risk by letting someone bypass the lock screen to send emails or access sensitive information – and many mobile security solutions are not foolproof. People will always find workarounds, which for many is the most significant problem.
In order to select the best mobile security solution for your business, you need to find a happy balance between security and speed of business. These two issues rarely go hand in hand.
As a security professional, I want protection and security to be at the forefront of everyone’s mind, with dedicated focus on managing it securely. As a manager, I would want the functionality of the solution to be the most effective when it comes to analyzing data. However, as a user, most people favor ease of use and convenience to the detriment of other more important factors.
Both users and security staff need to be cognizant of the fact that they’re operating in the same space and must work together to strike the same balance. It’s a shared responsibility but, importantly, companies need to decide how much risk they are willing to accept.
Anand Ramanathan, VP of Product Management, McAfee
The permanent impact of COVID-19 has heightened attacker focus on work-from-home exploits while increasing the need for remote access. Security professionals have less visibility and control over WFH environments where employees are accessing corporate applications and data, so any evaluation of mobile security should be based on several fundamental criteria:
- “In-the-wild” security: You don’t know if or how mobile devices are connecting to a network at any given time, so it’s important that the protection is on-device and not dependent on a connection to determine threats, vulnerabilities or attacks.
- Comprehensive security: Malicious applications are only one attack vector. Mobile security should also protect against phishing, network-based attacks and device vulnerabilities, and should protect the device against known and unknown threats.
- Integrated privacy protection: Given the nature of remote access from home environments, you should have the ability to protect privacy without sending any data off the device.
- Low operational overhead: Security professionals have enough to do in response to new demands of supporting business in a COVID world. They shouldn’t be obligated to manage mobile devices differently than other types of endpoint devices and they shouldn’t need a separate management console to do so.