How do I select an endpoint protection solution for my business?

Endpoint protection has evolved to safeguard organizations against complex malware and emerging zero-day threats.

To select an appropriate endpoint protection solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.

Theresa Lanowitz, Head of Evangelism, AT&T Cybersecurity

Corporate endpoints represent a top area of security risk for organizations, especially considering the shift to virtual operations brought on by COVID-19. As malicious actors target endpoints with new types of attacks designed to evade traditional endpoint prevention tools, organizations must seek out advanced endpoint detection and response (EDR) solutions.

Traditionally, enterprise EDR solutions carry high cost and complexity, making it difficult for organizations to implement EDR successfully. While many security teams recognize the need for EDR, most do not have the resources to manage a standalone endpoint security solution.

For this reason, when selecting an EDR solution, it’s critical to seek a unified solution for threat detection, incident response and compliance that can be incorporated into an organization’s existing security stack without adding cost or complexity. Look for endpoint solutions where security teams can deploy a single platform that delivers advanced EDR combined with many other essential security capabilities in a single pane of glass, to drive efficiency of security and network operations.

Overall, organizations should select an EDR solution that enables security teams to detect and respond to threats faster while eliminating the cost and complexity of maintaining yet another point security solution. This approach can help organizations bolster their cybersecurity and network resiliency, with an eye towards securing the various endpoints used in today’s virtual workforce.

Rick McElroy, Cyber Security Strategist, VMware Carbon Black

With the continuously evolving threat landscape, there are a number of factors to consider during the selection process. Whether a security team is looking to replace antiquated malware prevention or empower a fully-automated security operations process, here are the key considerations:

  • Does the platform have the flexibility for your environment? Not all endpoints are the same, so broad coverage of operating systems is a must.
  • Does the vendor support the MITRE ATT&CK Framework for both testing and maturing the product? Organizations need to test security techniques, validate coverage and identify gaps in their environments, and implement mitigation to reduce attack surface.
  • Does it provide deeper visibility into attacks than traditional antivirus? Organizations need deeper context to make a prevention, detection or response decision.
  • Does the platform provide multiple security functions in one lightweight sensor? Compute is expensive, so endpoint security tools should impact the system as little as possible.
  • Is the platform usable at scale? If your endpoint protection platform isn’t centrally analyzing behaviors across millions of endpoints, it won’t be able to spot the minor fluctuations in normal activity that reveal attacks.
  • Does the vendor’s roadmap meet the future needs of the organization? Any tool selected should allow teams the opportunity for growth and the ability to use it for multiple years, building automated processes around it.
  • Does the platform have open APIs? Teams want to integrate endpoints with SIEM, SOAR platforms and network security systems.
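Open APIs are what make that last point practical. As a minimal, vendor-neutral sketch (the endpoint URL, token and field names below are invented placeholders, not any real EDR product’s API), a script can poll for new detections and forward them to a SIEM over syslog:

```python
import json
import logging
import logging.handlers

import requests  # third-party; pip install requests

# Hypothetical EDR API endpoint and token -- placeholders, not a real vendor API.
EDR_API = "https://edr.example.com/api/v1/detections"
API_TOKEN = "REPLACE_ME"

# Many SIEMs accept events over syslog; point this at your collector.
syslog = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
logger = logging.getLogger("edr-forwarder")
logger.addHandler(syslog)
logger.setLevel(logging.INFO)

def forward_new_detections(since: str) -> None:
    """Pull detections newer than `since` (ISO 8601) and forward them."""
    resp = requests.get(
        EDR_API,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"created_after": since},
        timeout=30,
    )
    resp.raise_for_status()
    for detection in resp.json().get("detections", []):
        # Forward as structured JSON so the SIEM can parse the fields.
        logger.info(json.dumps(detection))

if __name__ == "__main__":
    forward_new_detections("2020-06-01T00:00:00Z")
```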

David Ngo, VP Metallic Products and Engineering, Commvault

With millions working remotely due to COVID-19, laptop endpoints used by employees working from home are particularly vulnerable to data loss.

This has made it more important than ever for businesses to select a strong endpoint protection solution that:

  • Lowers the risk of lost data. The best solutions have automated backups that run multiple times during the day to ensure recent data is protected, and security features such as geolocation and remote wipe for lost or stolen laptops. Backup data isolation from source data can also provide an extra layer of protection from ransomware. In addition, anomaly detection capabilities can identify abnormal file access patterns that indicate an attack (a minimal sketch of the idea appears after this list).
  • Enables rapid recovery. If an endpoint is compromised, the solution should accelerate data recovery by offering metadata search for quick identification of backup data. It’s also important for the solution to provide multiple granular restore options – including point-in-time, out-of-place, and cross-OS restores – to meet different recovery needs.
  • Limits user and IT staff administration burdens. Endpoint solutions with silent install and backup capabilities require no action from end users and do not impact their productivity. The solution should also allow users and staff to access backup data anytime, anywhere, from a browser-enabled device, and make it possible for employees to search and restore files themselves.
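On the anomaly-detection point above: as a minimal sketch of the idea (the thresholds are illustrative assumptions, not any vendor’s implementation), a monitor can flag the burst of file modifications that mass encryption by ransomware tends to produce:

```python
import time
from collections import deque
from pathlib import Path

# Illustrative threshold: flag more than 100 modified files in any 60-second window.
WINDOW_SECONDS = 60
MAX_CHANGES_PER_WINDOW = 100

recent_changes = deque()  # timestamps of observed file modifications

def record_change(path: Path) -> bool:
    """Record one file modification; return True if the rate looks anomalous."""
    now = time.time()
    recent_changes.append(now)
    # Drop events that have fallen out of the sliding window.
    while recent_changes and now - recent_changes[0] > WINDOW_SECONDS:
        recent_changes.popleft()
    if len(recent_changes) > MAX_CHANGES_PER_WINDOW:
        print(f"ALERT: {len(recent_changes)} file changes in "
              f"{WINDOW_SECONDS}s (last: {path}) -- possible ransomware")
        return True
    return False
```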

James Yeager, VP of Public Sector, CrowdStrike

Decision-makers seeking the best endpoint protection (EPP) solution for their business should be warned that legacy security solutions are generally ineffective, leaving organizations highly susceptible to breaches and placing a huge burden on security teams and users.

Legacy tools, built on on-premises architectures, are unable to keep up with the capabilities made available in a modern EPP solution, like collecting data in real time, storing it for long periods and analyzing it in a timely manner. Storing threat telemetry data in the cloud makes it possible to quickly search petabytes of data to glean historical context for activities running on any managed system.

Beware of retrofitted systems from vendors advertising newer “cloud-enabled” features. Simply put, these “bolt-on” models are unable to match the performance of a cloud-native solution. Buyers run the risk of their security program becoming outdated with tools that cannot scale to meet the growing needs of today’s modern, distributed workforce.

Furthermore, comprehensive visibility into the threat landscape and the overall IT hygiene of your enterprise are foundational for efficient security. Implementing cloud-native endpoint detection and response (EDR) capabilities that leverage machine learning into your security stack will deliver visibility and detection for threat protection across the entire kill chain. Additionally, a “hygiene first” approach will help you identify the most critical risk areas early on in the threat cycle.

Delivering and maintaining security at the speed of digital transformation

Dustin Rigg Hillard, CTO at eSentire, is responsible for leading product development and technology innovation. His vision is rooted in simplifying and accelerating the adoption of machine learning for new use cases.

In this interview Dustin talks about modern digital threats, the challenges cybersecurity teams face, cloud-native security platforms, and more.

What types of challenges do in-house cybersecurity teams face today?

The main challenges that in-house cybersecurity teams have to deal with today are largely due to ongoing security gaps. As a result, overwhelmed security teams don’t have the visibility, scalability or expertise to adapt to an evolving digital ecosystem.

Organizations are moving toward the adoption of modern and transformative IT initiatives that are outpacing the ability of their security teams to adapt. For security teams, this means constant change, disruptions with unknown consequences, increased risk, more data to decipher, more noise, more competing priorities, and a growing, disparate, and diverse IT ecosystem to protect. The challenge for cybersecurity teams is finding effective ways to deliver and maintain security at the speed of digital transformation, ensuring that every new technology, digital process, customer and partner interaction and innovation is protected.

Cybercrime is being conducted at scale, and threat actors are constantly changing techniques. What are the most significant threats at the moment?

Threat actors, showing their usual agility, have shifted efforts to target remote workers and take advantage of current events. We are seeing attackers exploiting user behavior by misleading users into opening and executing a malicious file, going to a malicious site or handing over information, typically using lures which create urgency (e.g., by masquerading as payment and invoice notifications) or leverage current crises and events.

What are the main benefits of cloud-native security platforms?

A cloud-native platform offers important advantages over legacy approaches—advantages that provide real, important benefits for cybersecurity providers and the clients who depend on them.

  • A cloud-native architecture is more easily extensible, which means more features, sooner, to enable analysts and protect clients
  • A cloud-native platform offers higher performance because the microservices inside it can maximally utilize the cloud’s vast compute, storage and network resources; this performance is necessary to ingest and process the vast streams of data which need to be processed to keep up with real-time threats
  • A cloud-native platform can effortlessly scale to handle increased workloads without degradation to performance or client experience

Security platforms usually deliver a variety of metrics, but how does an analyst know which ones are meaningful?

The most important metrics are the ones that show how the platform delivers security outcomes:

  • How many threats were stopped with active response?
  • How many potentially malicious connections were blocked?
  • How many malware executions were halted?
  • How quickly was a threat contained after initial detection?
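As a minimal illustration of how such outcome metrics can be computed from raw incident records (the field names and data below are invented assumptions, not any platform’s schema):

```python
from datetime import datetime, timedelta

# Hypothetical incident records; a real platform would export these via API.
incidents = [
    {"detected": datetime(2020, 6, 1, 9, 0), "contained": datetime(2020, 6, 1, 9, 12),
     "action": "active_response"},
    {"detected": datetime(2020, 6, 2, 14, 5), "contained": datetime(2020, 6, 2, 14, 50),
     "action": "connection_blocked"},
    {"detected": datetime(2020, 6, 3, 11, 30), "contained": datetime(2020, 6, 3, 11, 34),
     "action": "execution_halted"},
]

# Count of threats stopped, broken down by response action.
by_action = {}
for inc in incidents:
    by_action[inc["action"]] = by_action.get(inc["action"], 0) + 1

# Mean time to contain a threat after initial detection.
mean_ttc = sum(((i["contained"] - i["detected"]) for i in incidents), timedelta()) / len(incidents)

print(by_action)
print(f"Mean time to contain: {mean_ttc}")
```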

Modern security platforms help simplify data analytics by delivering capabilities that amplify threat detection, response and mitigation activities; deliver risk-management insights; and help organizations stay ahead of potential threats.

Cloud-native security platforms can output a wide range of data insights including information about threat actors, indicators of compromise, attack patterns, attacker motivations and capabilities, signatures, CVEs, tactics, and vulnerabilities.

How can security teams take advantage of the myriad security tools that have accumulated in the organization’s IT ecosystem over many years?

Cloud-native security platforms ingest data from a wide variety of sources such as security devices, applications, databases, cloud systems, SaaS platforms, IoT devices, network traffic and endpoints. Modern security platforms can correlate and analyze data from all available sources, providing a complete picture of the organization’s environment and security posture for effective decision-making.

Ransomware recovery: Moving forward without backing up

Phishing scams tied to COVID-19 show no signs of stopping. More than 3,142 phishing and counterfeit pages went live every day in January, and by March the number had grown to 8,342. In mid-April, Google reported seeing more than 18 million pandemic-related malware and phishing emails each day over the course of a single week. By mid-May, a new high in cybercriminal activity had been set, and coronavirus had clearly played a major role.

The main cause of data breaches continues to be human error. With so many employees suddenly working from home – cut off from everyday contact with IT – the pandemic has offered hackers an ideal period to exploit a lack of security vigilance. Outdated home software, forgotten updates, skipped patches… Aside from a welcome mat, hackers couldn’t have a more gracious invitation or an easier path into a company.

IT concern and chaos

For IT, the biggest concern with a remote workforce is the inability to control the network in a traditional sense. Perhaps their greatest fear is a ransomware attack on company data made possible by users connected through their VPN and attaching to file shares.

With the pandemic, more people are seeking information and visiting websites with charts and graphs holding related statistics. Sadly, bogus or malicious sites take advantage of the situation. Making matters worse, networks are often shared with others, such as the employee’s children, who use them for recreational activities but aren’t so savvy at identifying threats. Most ransomware attacks are the result of visiting hacked or malicious websites or clicking on an infected email attachment.

Attackers have been taking advantage of remote work “chaos” and the onslaught is unsettling. We’re seeing an uptick in information-gathering attempts and rising malicious code and ransomware instances, because people are visiting places they normally wouldn’t and hackers are leveraging changes in work habits.

Malware is increasingly holding company resources and data for ransom, which in addition to that expense can cause costly downtime, negatively impact a company’s reputation and more.

Backup and disaster recovery (DR) technologies have progressed in recent years, reducing recovery point and time objectives (RPO and RTO). However, they haven’t kept pace with hackers, and the backup process is a significant administrative and management burden.

One step forward, two steps back

Ransomware attacks are extremely disruptive. IT needs to figure out how the infection started and see if they can prevent it from happening again. It’s imperative to have a reliable backup copy from before the infection, but in some cases, ransomware can even encrypt those along with the original files. A lot of details need to be worked out.

The problem is, traditional backups – while often an organization’s last line of defense against a disaster – are outdated and cost companies a lot of time and money. Configuring incremental and full backup schedules or pulling backups across a WAN to a central site is cumbersome at best, unreliable at worst. So is babysitting backups to find out if they worked, and rotating and refreshing tape and disk media.

In the end, recovery still takes hours or days.

Not only does this pose a significant administrative and management burden, but backup also remains an expensive bolt-on to storage systems. In large organizations, entire teams are dedicated to managing the backup process and ensuring backup integrity. Faulty or corrupt backups remain a significant problem; in fact, ransomware can deliver code that works its way through systems over time before attacking data.

Unfortunately, restoring to just before the point of origin could actually set the attack in motion all over again.

Getting ahead without backing up

In a perfect world, you wouldn’t need to buy a data protection solution: your storage system would protect itself. But the world is not perfect, and that’s why enterprises deploy a storage system with backup and DR. That said, today there actually isn’t a need for separate storage and backup systems.

By taking advantage of the cloud, global file systems can enable companies of all sizes to store, access and share file data without further backup and DR systems. They can take snapshots to capture changes – every five minutes for active data – which are sent to the cloud where the gold copy is kept. The global file system can store these in the cloud without any significant additional cost.

If snapshots are written to the cloud as Write Once Read Many (WORM) objects, data is prevented from being corrupted or overwritten. With separate metadata versions for each snapshot, restoring a file or even multiple terabytes of data takes just seconds, eliminating a full restore or migration.
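As an illustration of the WORM principle (not any particular vendor’s file system), here is a minimal sketch using Amazon S3 Object Lock, which lets an object be written once and then protected from overwrite or deletion until a retention date passes. The bucket and key names are placeholders, and the bucket must have been created with Object Lock enabled:

```python
from datetime import datetime, timedelta, timezone

import boto3  # third-party; pip install boto3

s3 = boto3.client("s3")

# Placeholder bucket; it must have been created with Object Lock enabled.
BUCKET = "example-snapshot-bucket"
key = f"snapshots/{datetime.now(timezone.utc):%Y-%m-%dT%H-%M-%S}.snap"

s3.put_object(
    Bucket=BUCKET,
    Key=key,
    Body=b"...serialized snapshot data...",
    # COMPLIANCE mode: nobody, including the account root, can overwrite
    # or delete this object version until the retention date passes.
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
)
```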

What makes the process fast is that you only need to point to an earlier version of the files; there’s no need to undergo a slow copy. Because the gold copy is incremental, you’ll likely find a version that was captured just minutes before the point of infection.

Simply put, self-protecting, cloud-based global file systems do away with the need for a separate backup system. With this approach, not only does IT no longer need to dedicate time and resources to backup management, but it also gains better RPOs and RTOs and the ability to recover from ransomware attacks in minutes. For many IT leaders in 2020, the first step to effectively countering ransomware and ensuring their enterprises continue to move forward will be to stop backing up.

How do I select a DMARC solution for my business?

Domain-based Message Authentication, Reporting & Conformance (DMARC) is an email authentication, policy, and reporting protocol. It builds on the SPF and DKIM protocols to improve and monitor protection of the domain from fraudulent email.
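As a concrete illustration, all three protocols are published as DNS TXT records. The records below are a minimal example for a placeholder domain; the selector name, addresses and policy are illustrative, and the DKIM public key is elided:

```
example.com.                      IN TXT "v=spf1 include:_spf.example.com ~all"
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<public key here>"
_dmarc.example.com.               IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```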

To select a suitable DMARC solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.

Scott Croskey, Global CISO, Cipher

DMARC solutions add security to business email systems by ensuring DKIM and SPF standards are in place to mitigate risks from fraudulent use. They evaluate every inbound and outbound email for these security standards and can integrate with Secure Email Gateway solutions to block malicious activity.

When evaluating DMARC solutions, you should focus on vendors that employ the following features:

  • Cloud-based (SaaS) deployment. This eases the burden on company IT teams, allowing for the solution to be easily deployed and configured with out-of-the-box security policies.
  • Domain diagnosis. This will ensure your business is aware of any domain vulnerabilities, many of which SMBs commonly overlook, increasing their risk.
  • User friendly dashboard. This will ensure your team does not need a lot of time to understand how the solution works.

For larger companies, you should also consider vendors that employ:

  • Forensic reporting. This provides detailed information on why emails may have failed DMARC checks and allows for additional system tuning.
  • DNS record change tracking. This allows for additional insight into malicious activity.
  • API integration. Large companies typically have internal dashboards and workflows. API Integration with the DMARC solution will allow you to tailor the solution into your enterprise reporting & analysis tools.

Len Shneyder, VP of Industry Relations, Twilio

A company that wants to achieve DMARC enforcement should consider a crawl, walk, run approach, as DMARC doesn’t work unless you have published SPF and DKIM. DMARC essentially communicates a policy and a set of prescriptive actions to a receiving domain on what to do if an email fails an SPF or DKIM check.

If a company has the technical aptitude to publish SPF and DKIM, then it stands to reason it can publish one more policy. However, when a sophisticated enterprise begins working with third parties that send email on its behalf – an email service provider for marketing communications, a ticketing system, an internal HR tool, or all of the above and more – the DMARC policy becomes much more complicated, and the company might consider turning to one of the small field of companies that have automated the process of reaching enforcement.

The question of which provider to choose really rests on the complexity and breadth of your company. Different providers are suited to companies of different sizes; if you haven’t reached the scale that demands one, there’s no reason why you couldn’t do it yourself.

Chuck Swenberg, SVP Strategy, Red Sift

It used to be sufficient to interpret DMARC reports, which provide a view of the mail authentication results for every IP that’s being used to send mail on behalf of your domain. However, these traditional stand-alone DMARC tools linked with professional services are increasingly neither cost-effective nor responsive to organizational needs. The continuing rise of email threat volumes and the increased diversification and enablement of app/cloud services for email require strong diligence in selecting a solution. DMARC should also no longer be viewed as just a one-time configuration project.

Key considerations:

  • Accuracy: How complete is the classification of the IPs reported as mail senders, and the subsequent categorization of which mail belongs to my organization?
  • Insight: Is there a clear, defined workflow process in the solution? The best solutions will have easy-to-use, staged flows that display recommended actions and contextual guides explaining misconfigurations in email authentication from the data presented. Data needs to be actionable with insight.
  • Automation: How long will it take my organization to implement DMARC? How can I effectively maintain a DMARC enforcement policy on an ongoing basis? More recent platform solutions for DMARC use hosted management for SPF authentication, which allows for expansion past the 10-lookup SPF limit and provides far more reliable and resilient email delivery. Ongoing automated monitoring with alerting that recognizes changes in authentication, identifies new sources and takes immediate action should be a requirement.
  • Value: How much should I budget, and how can total cost and time resources be efficiently managed? Look for automation of defined actions and applied expertise to implement those actions in the best manner for the organization. This will help limit the dependency on external professional services and result in significantly lower costs over time.

Automation is fundamental to selecting a solution that significantly lowers cost, reduces time to implementation of DMARC, and ensures a more reliable approach to the handling and delivery of your organization’s email.

Anna Ward, Head of Deliverability, Postmark

A good DMARC solution should clearly identify high-risk sources, forwarders, and common email providers. It should provide actionable next steps for mitigating risk and minimize details until you actually need them. Avoid solutions that don’t show all authenticating domains or differentiate between merely passing SPF/DKIM and actual alignment.

Remember that adding a DMARC solution is essentially just adding a reporting address to your policy, so try out a few (or several at a time) if you’re curious about any provider.
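For instance, evaluating two providers at once is just a matter of listing both of their reporting addresses in the rua tag of your existing record (the domains below are placeholders):

```
_dmarc.example.com. IN TXT "v=DMARC1; p=none; rua=mailto:dmarc@vendor-a.example,mailto:dmarc@vendor-b.example"
```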

How hands-on do you want to be? Will you regularly access the data via API, the app/website, email digests, etc.? For sharing the data with multiple people/teams, look for secure multi-user management. Want a human guiding your progress, or do you prefer the ability to self-serve? Finally, consider whether you’d point your DNS records to your DMARC provider, as some will include/exclude sending sources for you.

Pricing:

  • If you have many low-sending domains, look for tiered pricing by volume. Some are even free below a certain volume.
  • If you have a higher-volume domain, look for pricing per monitored domain. This also limits price fluctuations, especially if there’s a surge in unauthorized mail.
  • With both pricing options, check whether they include monitoring for subdomains inheriting the DMARC policy from the main domain.

A look at modern adversary behavior and the usage of open source tools in the enterprise

Leszek Miś is the founder of Defensive Security, a principal trainer and security researcher with over 15 years of experience. Next week, he’s running an amazing online training course – In & Out – Network Exfiltration and Post-Exploitation Techniques [RED Edition] at HITBSecConf 2020 Singapore, so it was the perfect time for an interview.

What are the main characteristics of modern adversary behavior? What should enterprise security teams be on the lookout for?

This is a very open question as it depends on the attacker’s skillset and offensive experience. Modern adversaries like to behave in various ways. Don’t forget it’s also closely related to what the target is, and the attacker’s budget.

From what we are seeing in the wild, in most cases an adversary uses a combination of publicly available tools like RATs and offensive C2 frameworks powered up by a large number of post-exploitation and lateral movement modules, along with advanced and well-known tactics, techniques and procedures. The goal is to get initial access to the network, pivot over systems, networks or even OS processes, escalate privileges if needed, find the interesting data assets, copy and hide them (sometimes in very unusual network locations), and eventually persist and exfiltrate the data using a different set of communication channels.

Advanced attackers like to blend into network traffic of the target to become even more stealthy. Adversaries also like to make major modifications to open source tools for making detection harder. CVEs in the form of 0-day or 1-day exploits are often in use.

Big network environments are very hard to maintain and even understand – attackers are very good at that. Proven protection and detection are hard to achieve too. One single parameter or argument visible from the process list could make a significant difference.

That’s the reason why companies should constantly test their environments against TTPs. The baseline profiling of your core network components, OS, devices and apps, adversary simulations, achieving full visibility and analytics across many different network data sources, correlation, and understanding of how each component affects the other one seems like a good approach for dealing with cybersecurity risks.

It’s not about if, it’s about when you will become a target. You need to be prepared. That’s the reason why at least an understanding of publicly available offensive tools and techniques is crucial in the fight against attackers. We have to train and learn new stuff every single day, as attackers do. We have to test our assumptions in the field of purple teaming, where two teams – the red one and the blue one – work together, simulating real threats and doing detection research at the same time. Without threat hunting, you are blind.

Based on what the market is saying, having a dedicated defensive/offensive training environment ready to use out-of-the-box is a good path that allows us to be prepared. We cannot, however, do much without:

  • Understanding what the real threat is
  • Solid technological base
  • Skilled teams and risk-aware management
  • Being up to date
  • Dedicated budget for training
  • Research time
  • Desire to learn

Based on your experience, what are the most significant misconceptions when it comes to network exfiltration? What are training attendees mostly surprised about?

The most significant misconception when it comes to network exfiltration is the belief, held without checking, that something is impossible: “This box does not have direct internet access so you can’t steal data from it.” Really? That’s the power of the pivoting and lateral movement phase. During an adversary simulation, it’s always the case.

Show me or let me simulate your scenario and I’ll understand. Training attendees are surprised mostly about two things. The first is the ease of performing certain elements of the attack and the number of possibilities. The second is related to chained attack scenarios. Whenever you are skilled enough to combine or chain together different techniques, tools, or “exotic” communication channels – you are the winner. You have to spend lots of hours playing to understand and make progress.

“Feeling the network” is very important. I also found very surprising the number of possibilities for using valid, normal network channels like cloud-based services for exfiltration or C2. SSH over a Google service? Data exfiltration over Dropbox? C2 over a Slack channel? Is it really possible, and so easy, at the same time?

What’s your take on using open source tools within an enterprise security architecture?

I have two points of view, they are related to the offensive and defensive side and both are positive. In short, I believe they should be a part of every company’s cybersecurity strategy.

From the offensive perspective, it’s amazing how many free open source tools help with the execution of adversary simulations, penetration testing services or just doing research. Open source delivers flexibility – and I am sure most of the red teamers use or create open source projects while working for large companies. It’s a great value for everyone. Recently, blue teams have started doing the same and we’re seeing some powerful knowledge out there.

From a defensive point of view, OSS is in use almost everywhere: even if a huge part of the enterprise infrastructure is based on commercial products, you will find open source components inside. Many commercial products would not be possible without OSS.

I am a big supporter of having critical security areas covered by OSS. Just to name a few: Zeek IDS, Suricata IDS, Moloch, osquery and Kolide Fleet, ModSecurity as WAF, Volatility Framework for memory analysis, auditd, iptables, LKRG for Linux kernel hardening, Graylog, Wazuh / OSSEC, (H)ELK, eBPF, theHive, MISP, Sigma rules – it is impossible to list all of them here. These are all very stable projects that can be used as supporting technology or for creating your own SOC environment from scratch. Big kudos to the open source community!
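As a small taste of what these tools enable: osquery exposes the operating system as SQL tables, so a classic hunt such as “find processes still running after their executable was deleted from disk” (a common trait of in-memory payloads) becomes a one-line query:

```sql
-- Processes whose on-disk binary has been deleted -- worth investigating.
SELECT pid, name, path FROM processes WHERE on_disk = 0;
```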

What advice would you give to those just entering the cybersecurity industry who want to work in security operations? What skills should they develop?

Based on my experience, I would say that learning the basics is key: without a solid foundation you’ll never understand how things work. I would suggest learning how networks work and how Linux internals work. You should patch and compile your own Linux kernel, and play with system rootkits, trying to detect them from the defensive side.

The same small-step approach applies to a Windows infrastructure: AD internals, LDAP, Kerberos, GPO, DNS, etc. – all of them matter. At the same time, you could learn virtualization techniques and take your first programming steps to eventually get into exploitation or reversing. Building your own research lab or using ready-to-use platforms like PurpleLabs should give you a nice acceleration.

The short and simple answer does not exist, but stubbornness, discernment, enthusiasm, an open mind, hard work, and thousands of hours spent at the computer learning new stuff will eventually allow you to choose the right path in the cybersecurity world.

Data exfiltration: The art of distancing

We have all seen the carefully prepared statement. A cyber incident has occurred, we are investigating but please do not worry since no data has left our network. Perhaps we will also see the obligatory inclusion of a ‘sophisticated’ threat actor by way of explanation as to how the company protecting our data was able to be compromised.

This assertion is necessary since it can be critical in the light of regulatory fines, and for some time was a claim that was often used in public admittance of ransomware incidents.

Not any more.

Since late 2019, an evolving tactic began to emerge: publicly demonstrating that not only were criminals inside a company’s network, but that their unfettered access allowed them to leave with data (which is regulated), then threatening to leak the sensitive content if the ransom wasn’t paid. Indeed, such was the ferocity of the claims by victims that the tactic was perceived as a way to extort more money.

This, sadly, has of course proven to be very successful and has led multiple ransomware groups to build similar capabilities and leak sites. According to Coveware, for example, “nearly 9% of all cases it worked on involved ransomware attackers stealing and threatening to leak data.”

This represents a significant problem with the defence that data was not accessed.

Indeed, the very concept of a ransomware attack, or even any other type of cyber incident, needs to be considered not in isolation but potentially as part of a wider campaign. For example, a recent investigation into the use of Hermes ransomware drew the conclusion that it was a vehicle to make evidence gathering more difficult rather than extort money (since the financial systems themselves were already compromised).

This concept, which we originally cited as pseudo-ransomware, began to emerge circa WannaCry, but particularly with NotPetya, when ransomware payments did not result in the provision of a working decryption key. This, of course, is conscious intent, as opposed to bad development by the criminal.

What this emergence represents is a level of innovation designed as a vehicle to extort larger payments; moreover, the terminology we use, such as “ransomware attack”, is no longer accurate. These are breaches (and indeed the initial entry vector often points to this), and with data exfiltration now the modus operandi for many of the more capable criminal groups, we must consider reframing our initial assertions.

This extends equally beyond ransomware, to the DDoS attack that may have been a smokescreen while the ultimate purpose was to extort money from victims, or indeed any variety of threats.

As we consider how the threat landscape has changed, how we address and define each attack will become more critical to articulating the importance of cybersecurity. Simply reducing something to a technical description fails to communicate the impact such campaigns have on wider society. For example, the use of trolls to spread false information is more likely an attempt by a capable adversary to spread misinformation to influence the democratic process. A ransomware attack may be a direct attempt to cause a shutdown within an organization, to force a company or academic institution to pay seven-figure sums in order to continue operations.

Cybersecurity (or infosec) is a critical function within our society and ensuring it is articulated as such is one of our biggest challenges.

Cybersecurity software sales and training in a no-touch world

The pandemic has led to an outbreak of cybercriminal activity focused on remote workers and enterprises that needed to quickly migrate to the cloud to maintain business continuity. More than 3,100 phishing and counterfeit websites were created each day in January. By March, that figure exceeded 8,300. Communication and collaboration phishing sites also grew by 50% from January to March.

For enterprises caught off guard, security vulnerabilities were further exposed and the need to protect a remote workforce accelerated. With home security often not as robust as in the office, hackers target workers’ devices to gain an easier path into the enterprise, all while a security team struggles to prevent attacks with a skeleton crew.

Complicating matters, sales of complex cybersecurity software and services, and the training of users and partners, have traditionally been very high-touch, hands-on processes. Even as businesses slowly reopen, decision-makers and workers alike have seen the time and cost savings of remote work, so it will become much more common, and face-to-face interactions less so.

So, how are cybersecurity software providers going to bridge those personal gaps when no-touch is the only option and will become more prominent in the future? How can staff effectively create and show demos and proofs-of-concepts (PoCs) that are engaging for the sale of complex cybersecurity solutions? How will they deliver the personal attention needed to train users on these advanced tools?

If you’re a security vendor, you’re likely asking yourself the same questions. Here’s some advice to help you find the right approach and tools for getting tangible results in a no-touch business world.

Realistic and simple

In the recent past, on-site demos and PoCs were relied upon to illustrate product value, especially in pre-sales where a hands-on trial can close a deal. For effective education, users require the same experience when training.

The pandemic drove many companies to platforms like Zoom and WebEx, which are fine for general conferencing purposes. But they are limited, and in the cybersecurity world, prospects require heightened engagement. So, always evaluate solutions with two key performance criteria in mind.

Accuracy: You need software to be shown in conditions that rival a prospect’s IT environment. The more realistic, the more decision-makers will be convinced that your solution will overcome their security pain points and introduce benefits.

Simplicity: The ability for a sales or training team to more easily conduct activities increases outreach volume, eliminates friction and accelerates cycles. Complexity eats time and money, and when it discourages external users, adoption stalls and trials are abandoned.

Be sure your teams have technology that will showcase your software realistically and simply.

Low touch, low cost?

Running cybersecurity sales or training activities on-site with dedicated hardware provides an accurate experience for evaluating software as intended, in a prospect’s environment. The problem is, not only is that not possible right now, but on-site is also costly and difficult to scale. There’s equipment to ship and logistics to be ironed out. A prospect’s IT department needs to grant permission and become involved in setup. Then there’s travel and related expenses.

It’s no secret that security software leaders have wanted to lower customer acquisition and training costs. A low-touch model of engagement with things like video tutorials, personalized sales content and video-enabled demos would streamline cycles. Conferencing and collaboration platforms can also provide a low-touch, simpler means via apps and cloud services.

But low touch and low cost don’t necessarily translate into high cost-efficiency. These approaches, again, are limited. They’re simple enough but don’t deliver an accurate, fully engaging experience. And prospects don’t gain a clear understanding of how the software will perform when it is finally deployed into their production environment.

The future is virtual

How can you show, from afar, your cybersecurity software or service in-action, solving real-world problems? How do you put the controls in users’ hands so they can know exactly what you’re talking about?

The best way forward is with virtual labs, which enable versions of actual software to be made available in a virtualized, cloud-based environment mirroring real-world scenarios. Virtual labs can deliver a true experience with your product from anywhere, so long as the user has a browser and an internet connection. That also means lowering costs and reducing time in areas from equipment shipping to IT involvement to travel expenses.

Still, not all virtual IT labs are the same. Some haven’t evolved or are regarded as difficult tools used by techies. What cybersecurity software providers need is a “sales acceleration” cloud-based solution with virtual labs that can be easily used by a wider audience, including business leaders, to reduce sales cycles and produce faster results. It should be part of a purpose-built platform, with self-service capabilities, analytics, usage control and more. Other key abilities include automating processes and integration with core business tools.

Cybersecurity software providers don’t have to settle – they can be more agile and engaging, no matter where their people are located. They can increase their competitiveness and overcome no-touch challenges to close those personal gaps. It’s really an opportunity, not just to increase cost-efficiency, but to ensure your company is future-proofed to handle whatever may come next.

How data science delivers value in a post-pandemic world

With businesses from various industries tightening their belts due to pandemic-induced economic challenges, investing in data science applications and building out data science teams may be taking a backseat. While the primary focus must be on preserving cash flow, what many companies don’t realize is the power evolving data science applications have on business continuity and growth during these uncertain times, and the importance of shifting data science roles in implementing effective solutions.

Applying data science to help companies achieve their business objectives during this time should no longer be considered a luxury, but a necessity to help organizations stabilize and enter a phase of strategic growth. Here are the top areas where data science is delivering value for businesses post-pandemic, and how the roles within data science teams are shifting to facilitate this.

Refine customer targeting

As consumer preferences continue to shift in ways that would have been unimaginable pre-crisis, companies can no longer rely on what they have always known to be true. Whatever their preferences, customers only want to be targeted with the product recommendations and content most relevant to them. Achieving this during these economically tumultuous times requires a constant finger on the pulse of what targeted messaging will resonate.

For example, an e-commerce business may discover that customers who were previously most interested in travel products are now investing in gardening tools as they gear up for a summer at home. The resulting action: show products from the gardening section. The same applies to B2B companies: a software-as-a-service (SaaS) company might uncover insights into the different features of its product that have become more popular across certain user segments and use this data to upsell or cross-sell relevant packages.

Data science powers forecasting and simulations

With the potential for a second wave of the virus still a reality, businesses can use historical data from the first wave of the pandemic to anticipate how they can best react to future events. Now, with three months of customer behavior data, you can simulate various business outcomes during the second wave.

For example, those in the consumer packaged goods (CPG) industry were hit hard by the pandemic, with big disruptions to the supply chain impacting entire operations. Now, knowing how the middle nodes of the supply chain have the potential to break down due to quarantine and changing demands, CPG producers could seek to open up direct-to-consumer channels to reduce their dependence on wholesalers and retailers.

Here, the company could use data science to create a simulation model of working directly with consumers and integrate this into its business continuity planning going forward. AI-powered modeling can help companies not only stabilize for the near-future, but also drive them to simulate other dramatic changes such as those that may come from the climate crisis.
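As a minimal sketch of what such a simulation can look like (all numbers below are invented for illustration, not a real model), a company might use Monte Carlo trials to see how revenue shifts as wholesale nodes fail and a direct-to-consumer channel recaptures part of the lost demand:

```python
import random

random.seed(42)  # reproducible illustration

TRIALS = 10_000
BASE_REVENUE = 1_000_000   # monthly revenue, invented for illustration
WHOLESALE_SHARE = 0.7      # share of revenue flowing through wholesalers
P_NODE_DISRUPTION = 0.3    # chance a wholesale node breaks down in a wave
D2C_RECOVERY = 0.5         # fraction of lost wholesale demand D2C recaptures

outcomes = []
for _ in range(TRIALS):
    lost = 0.0
    # Model five wholesale nodes, each carrying an equal slice of wholesale revenue.
    for _ in range(5):
        if random.random() < P_NODE_DISRUPTION:
            lost += BASE_REVENUE * WHOLESALE_SHARE / 5
    recovered = lost * D2C_RECOVERY  # demand recaptured via direct-to-consumer
    outcomes.append(BASE_REVENUE - lost + recovered)

outcomes.sort()
print(f"mean revenue:   {sum(outcomes) / TRIALS:,.0f}")
print(f"5th percentile: {outcomes[int(0.05 * TRIALS)]:,.0f}")
```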

Data science empowers workforces

The automation of manual tasks and the use of AI chatbots are not new to many teams, but with the advent of the crisis, these technologies became more valuable than ever. While teams are squeezed on time and resources, AI-powered automation allows them to channel their efforts into the business activities that require human intelligence. One example of a sector leveraging AI to keep itself afloat during this time is air travel: airlines have been overloaded with customer service queries related to cancelled travel plans, and many are using AI chatbots to provide this information to customers.

Data science also shows value within workforces by providing insights to managers on areas where employees might need more support or resources. For example, company leaders can gather data on cloud infrastructure use by certain employees and determine whether or not they need more bandwidth or access to different features.

For those businesses that understand the value of data science but don’t have the in-house expertise necessary to execute it, low-code and no-code development platforms are allowing them to create analytics solutions – without a data science team. These platforms include Alteryx, Google’s Cloud AutoML, Amazon SageMaker and Azure AutoML. They provide an environment for developing AI or ML applications without the need for extensive programming experience.

Data science team roles are shifting

The uptick in the need for data science across industries comes with the need for data science teams. While hiring may have slowed down in the tech sector – Google slowed its hiring efforts during the pandemic – data science professionals are still in high demand. However, it’s important to keep a close eye on how these teams continue to evolve.

One position which is increasingly in-demand as businesses become more data-driven is the role of the Algorithm Translator. This person is responsible for translating business problems into data problems and, once the data answer is found, articulating this back into an actionable solution for business leaders to apply.

The Algorithm Translator must first break down the problem statement into use cases, connect these use cases with the appropriate data set, and understand any limitations on the data sources so the problem is ready to be solved with data analytics. Then, in order to translate the data answer into a business solution, the Algorithm Translator must stitch the insights from the individual use cases together to create a digestible data story that non-technical team members can put into action.

Data Engineers are also growing in importance as the amount of data that businesses routinely collect continues to grow exponentially. While data gathering is an important initial step in an organization’s data journey, the majority of this data goes into databases and stays in storage without ever being mined.

Here’s where the Data Engineer comes in. Data Engineers exist to stop this data from sitting idle and make it accessible, and hence actionable. This role is vital at the moment as companies may be missing important data insights from the last few months that are sitting unaddressed.

Data science is no longer something that only selected departments of organizations within certain industries deal with, as teams across sectors and departments realize its value during uncertain times. As industry applications grow and data science and business teams become more closely aligned, organizations will discover that the only true way to operate – in the good times and the bad – is by being data-driven.

How do I select a mobile security solution for my business?

The percentage of companies admitting to suffering a mobile-related compromise has grown, despite a higher percentage of organizations deciding not to sacrifice the security of mobile devices to meet business targets.

To make things worse, the C-suite is the most likely group within an organization to ask for relaxed mobile security protocols – despite also being highly targeted by cyberattacks.

In order to select a suitable mobile security solution for your business, you need to consider a lot of factors. We’ve talked to several industry professionals to get their insight on the topic.

Liviu Arsene, Global Cybersecurity Analyst, Bitdefender

A business mobile security solution needs to have a clear set of minimum abilities or features for securing devices and the information stored on them, and for enabling IT and security teams to remotely manage them easily.

For example, a mobile security solution for business needs to have excellent malware detection capabilities, as revealed by third-party independent testing organizations, with very few false positives, a high detection rate, and minimum performance impact on the device. It needs to allow IT and security teams to remotely manage the device by enabling policies such as device encryption, remote wipe, application whitelisting/blacklisting, and online content control.

These are key aspects for a business mobile security solution, as it both allows employees to stay safe from online and physical threats, and enables IT and security teams to better control, manage, and secure devices remotely in order to minimize any risk associated with a compromised device. The mobile security solution should also be platform agnostic, easily deployable on any mobile OS, centrally managed, and allow users to switch between profiles covering connectivity and encryption (VPN) settings based on the services the user needs.

Fennel Aurora, Security Adviser at F-Secure

Making any choice of this kind starts from asking the right questions. What is your company’s threat model? What are your IT and security management capabilities? What do you already know today about your existing IT, shadow IT, and employees’ bring-your-own devices?

If you are currently doing nothing and have few IT resources internally, you will not have the same requirements as a global corporation with whole departments handling this. As a farming supplies company, you will not face the same threats, and so will not have the same requirements, as an aeronautics company working on defense contracts.

In reality, even the biggest companies do not systematically take all three of the most basic steps. Firstly, you need to inventory your devices and IT, and be sure that the inventory is complete and up to date, as you can’t protect what you don’t know about. You also need, at minimum, to protect your employees’ devices against basic phishing attacks, which means using some kind of AV with browsing protection. You need to be able to deploy and update this easily via a central tool. A good mobile AV product will also protect your devices against ransomware and banking trojans via behavioral detection.

Finally, you need to help people use better passwords, which means helping them install and start using a password manager on all their devices. It also means helping them get started with multi-factor authentication.

Jon Clay, Director of Global Threat Communications, Trend Micro

Many businesses secure their PCs and servers from malicious code and cyber attacks, as they know these devices are predominantly what malicious actors will target. However, we are increasingly seeing threat actors target mobile devices, whether to install ransomware for quick profit or to steal sensitive data to sell in the underground markets. This means that organizations can no longer choose to forgo security on mobile devices – but there are a few challenges:

  • Most mobile devices are owned by the employee
  • Most of the data on the mobile device is likely to be personal to the owner
  • There are many different device manufacturers and, as such, difficulties in maintaining support
  • Employees access corporate data on their personal devices regularly

Here are a few key things that organizations should consider when looking to select a mobile security solution:

  • Lost devices are one reason for lost data. Requiring users to encrypt their phones using a passcode or biometric option will help mitigate this risk.
  • Malicious actors are looking for vulnerabilities in mobile devices to exploit, making regular update installs for OS and applications extremely important.
  • Installing a security application can help with overall security of the device and protect against malicious attacks, including malicious apps that might already be installed on the device.
  • Consider using some type of remote management to help monitor policy violations. Alerts can also help organizations track activities and attacks.

Discuss these items with your prospective vendors to ensure they can provide coverage and protection for your employees’ devices. Check their research output to see if they understand and regularly identify new tactics and threats used by malicious actors in the mobile space. Ensure their offering can cover the tips listed above, and ask whether they can help you with more than just mobile.

Jake Moore, Cybersecurity Specialist, ESET

Companies need to understand that their data is effectively insecure when their devices are not properly managed. Employees will tend to use their company-supplied devices in personal time and vice versa.

This unintentionally compromises private corporate data, due to activities like storing documents in insecure locations on personal devices or online storage. Moreover, unmanaged functions like voice recognition also contribute to organizational risk by letting someone bypass the lock screen to send emails or access sensitive information – and many mobile security solutions are not foolproof. People will always find workarounds, which for many is the most significant problem.

In order to select the best mobile security solution for your business, you need to find a happy balance between security and speed of business. These two issues rarely go hand in hand.

As a security professional, I want protection and security to be at the forefront of everyone’s mind, with dedicated focus on managing it securely. As a manager, I would want the functionality of the solution to be the most effective when it comes to analyzing data. However, as users, most people favor ease of use and convenience to the detriment of other, more important factors.

Both users and security staff need to be cognizant of the fact that they’re operating in the same space and must work together to strike the same balance. It’s a shared responsibility but, importantly, companies need to decide how much risk they are willing to accept.

Anand Ramanathan, VP of Product Management, McAfee

The permanent impact of COVID-19 has heightened attacker focus on work-from-home exploits while increasing the need for remote access. Security professionals have less visibility and control over WFH environments where employees are accessing corporate applications and data, so any evaluation of mobile security should be based on several fundamental criteria:

  • “In the wild security”: You don’t know if or how mobile devices are connecting to a network at any given time, so it’s important that the protection is on-device and not dependent on a connection to determine threats, vulnerabilities or attacks.
  • Comprehensive security: Malicious applications are a single vector of attack. Mobile security should also protect against phishing, network-based attacks and device vulnerabilities. Security should protect the device against known and unknown threats.
  • Integrated privacy protection: Given the nature of remote access from home environments, you should have the ability to protect privacy without sending any data off the device.
  • Low operational overhead: Security professionals have enough to do in response to new demands of supporting business in a COVID world. They shouldn’t be obligated to manage mobile devices differently than other types of endpoint devices and they shouldn’t need a separate management console to do so.

How do I select a security awareness solution for my business?

“Great security awareness training, that is part of a healthy cyber security culture and that is aimed at encouraging positive security behaviours, is essential. The problem is that awareness-raising training has a history of being dry, dull, technically-focused and ineffective,” Dr. Jessica Barker, Co-CEO of Cygenta, told us in a recent interview.

In order to select the right security awareness solution for your business, you need to think about a number of factors. We’ve talked to several industry professionals to get their insight on the topic.

David Lannin, CTO, Sapphire

Engaging positively with your audience is critical to the success of any security awareness solution. Every individual is different, each having their own preferences in learning style, content and pace. The solution you consider should be able to adapt to this, having rich and varied content suited to the right users and groups across your business.

Do not lose sight of how diverse an audience can be and where their areas of expertise lie. Educating a purchasing team on handling financial information online is appropriate, but a generic warning about password usage may be less useful to the security teams.

Test your employees’ awareness and measure their improvement. This provides a full HR/audit trail, and publishing the results over time keeps staff engaged and shows how effective the security awareness training has been. Identifying the individuals who are most phish-prone – a weak link in your cyber defenses – helps focus targeted training on them. Tailoring training to demonstrated understanding also means that those who prove their grasp early in the process can be exempted from further sessions.

Ensure that the results are tangible. Be able to demonstrate the security awareness solution is effective and improving the overall security posture of the business.

Lise Lapointe, CEO, Terranova Security

The right security awareness training solution will drive long-term behavioral change among employees to create a culture of security awareness.

There are five key components that must be in place to accomplish this:

  • High quality content: Security training cannot effectively be approached as “one-size-fits-all”. Varying the format and length of content promotes better participation and retention rates.
  • Intuitive phishing simulator: Out-of-the-box phishing scenarios that reflect real-life cyber threats integrated with training for feedback.
  • Multilingual content and platform: Out-of-the-box language support for global security awareness programs.
  • Communication and reinforcement materials: Large libraries of predesigned content and templates for internal campaign promotion and content reinforcement including videos, posters and newsletters.
  • Consultative approach: Security training that is tied to the business’s needs, with offerings including CISO coaching, managed services and content customization.

By choosing the right security awareness training solution, businesses can develop customized, multi-language campaigns that are engaging and informative – and most importantly, successful.

Michael Madon, SVP & GM Security Awareness and Threat Intelligence Products, Mimecast

Human error poses one of the biggest risks to any organization. Yet many organizations conduct cyber awareness training quarterly or even less frequently – which is simply not enough. Mimecast recently surveyed 1,025 IT decision makers and found that only 21% of respondents offer training on a monthly basis – the timeframe experts consider the gold standard.

The goal of any security awareness program should be to change employees’ perception of cybersecurity – helping them understand that it is not an inconvenience, but something that can help them be more effective in their jobs. But effectively educating employees on email and web security cannot be achieved through one-off training sessions or siloed events that involve non-interactive materials like sterile corporate videos and mass-produced pamphlets.

When identifying a security awareness solution, organizations should look for the following:

  • Humor – Not many people absorb information when it’s given in a format that is stale and boring. Humor captures people’s attention and is the best way to engage. Look for a solution that includes humor to communicate important information in a highly relatable way.
  • Short and frequent content – Offering a regular cadence of concise trainings is a great way to ingrain cybersecurity best practices into employees’ day-to-day activities. Training sessions should be delivered monthly and last five minutes or less.
  • Risk scoring – Risk scoring capabilities can help identify employees who are most at risk for attack and can help focus increased time and resources on specific individuals (a minimal scoring sketch follows this list).
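
To make the risk-scoring idea concrete, here is a minimal Python sketch. The weights, the 0.4 threshold and the data are invented for illustration; a real product would draw on far richer signals than simulation clicks and reports.

    # Illustrative only: weight phishing-simulation clicks against reports.
    # The 0.5 report credit and the 0.4 threshold are invented values.
    def phish_risk(clicks, reports, simulations):
        if simulations == 0:
            return 0.0
        return max(0.0, min(1.0, (clicks - 0.5 * reports) / simulations))

    staff = {"alice": (0, 4, 5), "bob": (3, 0, 5)}  # clicks, reports, sims
    at_risk = [n for n, (c, r, s) in staff.items() if phish_risk(c, r, s) > 0.4]
    print(at_risk)  # ['bob'] -> focus targeted training here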

Lance Spitzner, Certified Instructor, SANS Institute

Security awareness is ultimately a control to help ensure your organization is not only compliant, but also effectively managing and measuring its human risk. As such, you need a solution developed by experts who understand risk and know which risks and behaviors to focus on.

These decisions should be driven by data based on today’s latest threats, technologies and incident drivers. If you are focusing on the wrong behaviors, not only are you wasting your organization’s time, you could actually be increasing the risk to your organization – for example, by requiring people to regularly change their passwords.

Other key factors include how often the content is updated and how people will relate to it. As technology, threats and organizations change, so do risks, and your training should reflect that change. The other element is ensuring the training is a good fit for your organization and your culture. For example, if you have an outgoing organization that loves humor, then use humorous training. But if you have a large, diverse or more conservative organization, you will want training that adapts well to that environment.

Inge Wetzer, Social Psychologist Cybersecurity & Compliance, Secura

First of all: take a step back and ask yourself what exactly you want to achieve. Looking for an awareness solution implies that your goal is for all your employees to be aware of the security risks and to know what they should do. Your focus is knowledge. However, a gap exists between knowing what you should do and actual behavior. Many people are aware that they should lock their computer screens, but many still don’t behave accordingly.

Would you be happy if all employees in your organization passed an awareness test? What would that tell you about their actual behavior? So you may not be looking for a security awareness solution, but for a security behavior solution.

Psychology teaches us that behavior is defined by more than knowledge: our actions are also driven by personal factors such as our motivation and past experience. In addition, organizational factors such as context and culture also define behavior. For effective behavioral change, all aspects of behavior should be addressed. Moreover, the attention to these factors should be recurrent to keep the topic top of mind. So, look for a continuous program that focuses on safe behavior as end goal by paying attention to its three determinants: knowledge, personal factors and organizational factors.

A look inside privacy enhancing technologies

There is a growing global recognition of the value of data and the importance of prioritizing data privacy and security as critical cornerstones of business operations. While many events and developments could be viewed as contributing to this trend, it would be difficult to argue that the increased discussion generated by today’s accelerating regulatory environment has not played a significant role.

Consumers are more aware of what constitutes personal data, who they do (or do not) trust to manage it, and are demanding a data-centric approach to address privacy and security. A survey conducted by Pew Research Center last year found that 79% of adults were concerned about how companies were using the data collected about them, and 52% said they had opted not to use a product or service because they were worried about the personal information that would be collected.

Privacy enhancing technologies

Businesses are looking for ways to alleviate these concerns, not only in their interactions directly with consumers, but also in a B2B context. This has led to a resurgence of interest, advances, and commercialization in the area of privacy enhancing technologies, or PETs, a powerful category of technologies that enable, enhance, and preserve data privacy throughout its lifecycle. By adopting a data-centric approach to privacy and security, these technologies help ensure that sensitive data remains protected during processing (“data in use”).

PETs is an umbrella term describing technologies that secure data in use and are essential to preserving and enhancing privacy and security while searches or analytics are performed. These technologies include homomorphic encryption, private set intersection, secure multiparty computation, differential privacy, and trusted execution environments – and many of them have intersection points and/or can be used in conjunction with one another, depending on the use case.

While there is some nuance depending on application and use case, in general, the more secure the technology, the more privacy enhancing or privacy-preserving capabilities it provides. Of the technologies mentioned previously, homomorphic encryption offers the strongest security positioning (and hence is most privacy-enhancing). Trusted execution environments (TEE) offer the weakest (and hence are the least privacy-preserving). While it may be tempting to default to the highest level of security in all circumstances, it is important to understand each technology in order to determine which is the right choice for a given use case.

Homomorphic encryption

Homomorphic encryption is the most secure option. Widely considered to be the “holy grail” of encryption, it allows for computation in the encrypted or ciphertext space. Homomorphic encryption is not a new technology — it has been studied in the academic space for more than 30 years. And while it has been historically computationally intensive, recent breakthroughs now make it practical for a wide range of commercial applications.

At its core, homomorphic encryption provides two primitive operations in the ciphertext/encrypted space: 1) the ability to multiply two homomorphically encrypted values together and/or 2) the ability to add two homomorphically encrypted values together such that when you decrypt the product or sum you get a meaningful value. This is the only type of encryption in which you can combine encrypted values and get a meaningful result.

The two basic types of homomorphic encryption correspond to the ability to deliver these two primitives. Fully homomorphic encryption gives both multiplication and addition in ciphertext space, while partially homomorphic encryption gives only one of the two (multiplication in some schemes, addition in others). Both types are powerful and, as with all things in computing, those simple operations can be built together into algorithms that enable core business functionalities—including encrypted searches and encrypted analytics such as machine learning/AI.

It can be leveraged in techniques such as private set intersection to securely compute the overlapping items from two sets of data. It is important to note that implementations of homomorphic encryption algorithms and software products leveraging it to provide business capabilities are not hardware bound — although they may optionally leverage special types of hardware (GPUs, FPGAs) to accelerate some of the mathematical computations.
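
To make the multiplication primitive concrete, here is a toy Python sketch using unpadded (“textbook”) RSA, which happens to be multiplicatively homomorphic and so serves as a minimal example of partially homomorphic encryption. The primes are tiny and the scheme is deliberately insecure; it illustrates only the algebra.

    # Toy illustration only -- textbook RSA with tiny primes is NOT secure.
    p, q = 61, 53
    n = p * q                          # public modulus
    e = 17                             # public exponent, coprime with (p-1)(q-1)
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

    def encrypt(m):
        return pow(m, e, n)

    def decrypt(c):
        return pow(c, d, n)

    a, b = 7, 12
    product = (encrypt(a) * encrypt(b)) % n  # multiply two ciphertexts
    assert decrypt(product) == (a * b) % n   # the product survives decryption
    print(decrypt(product))                  # 84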

Secure multiparty computation

The secure multiparty computation (SMPC or MPC) family of techniques allows multiple parties to jointly operate on data while keeping their individual inputs private. Like homomorphic encryption, the technology is roughly 30 years old and has been an active area of research in academic circles since the mid-1980s. Breakthroughs in academia, as well as solution development by a number of technology providers in the commercial space, have matured SMPC to the point of practical applicability in certain use cases.

Ultimately, the security and hence privacy of SMPC varies widely and depends on which type is used. For example, some implementations of SMPC leverage homomorphic encryption and thus can offer strong security guarantees.
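
As a hedged illustration of one SMPC building block, the sketch below implements additive secret sharing: each input is split into random shares that reconstruct the value only when summed, so parties can jointly compute a total without revealing their inputs. Real protocols add communication, multiplication gates and protections against dishonest parties.

    import random

    P = 2**61 - 1  # prime modulus for the share arithmetic

    def share(secret, n_parties=3):
        """Split a secret into n random shares that sum to it modulo P."""
        shares = [random.randrange(P) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % P)
        return shares

    alice_input, bob_input = 42, 58
    a, b = share(alice_input), share(bob_input)
    partial = [(x + y) % P for x, y in zip(a, b)]  # each party adds locally
    print(sum(partial) % P)  # 100 -- the sum, with neither input disclosed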

Differential privacy

In differential privacy, randomly generated noise is added to the underlying data for obfuscation purposes and, as a result, any computations performed on the altered data are only statistically/directionally correct (i.e., not accurate). Thus, the use cases are narrower for DP than for other PETs since accurate results are not guaranteed and the possible computations are limited.
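
As a minimal sketch of that idea, the snippet below applies the classic Laplace mechanism to a count query. The epsilon value and data are illustrative; a count has sensitivity 1 because adding or removing one record changes the true answer by at most one.

    import math
    import random

    def laplace_noise(scale):
        """Sample Laplace(0, scale) by inverse-CDF."""
        u = random.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def private_count(records, epsilon=0.5, sensitivity=1.0):
        return len(records) + laplace_noise(sensitivity / epsilon)

    print(private_count(["r1", "r2", "r3"]))  # e.g. 3.7 -- directionally correct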

Trusted execution environments (TEE)

The least secure of the PETs is the TEE, sometimes also referred to as secure enclave technology. TEE security is essentially a perimeter-based security model — the perimeter is just very small and sits on the hardware chip itself instead of at a network boundary. As with any perimeter security model, if you can break through the perimeter, you can gain access to all data within. Because everything is decrypted within the perimeter of the on-chip enclave, TEEs enable very fast computation, at the cost of a weakened security and privacy posture. This may be suitable for some use cases with more relaxed security and privacy constraints (i.e., those that don’t require nation-state-level security or regulatory privacy adherence).

The best-known commercial offering in the TEE space is Intel SGX. Since the discovery of the Spectre and Meltdown vulnerabilities a few years ago, the space has unfortunately been rife with security issues that continue to be unearthed. As TEEs are hardware-bound, applications leveraging them to secure data in use will be as well, although API abstraction layers are being developed to aid application portability between different hardware TEEs.

Conclusion

The desire for privacy is not a passing trend. Whether led by government regulation or consumer demand, organizations must be ready to operate in a world that prioritizes data security and privacy.

Privacy enhancing technologies deliver technical solutions to this challenge. With broad and increasingly prevalent applications for PETs in the commercial space, a growing number of businesses will want to take advantage of these business-enabling capabilities. When doing so, it’s imperative to realize that not all PETs are created equal. One must first identify the privacy-centered business challenge to be solved and then select the PET that’s best positioned to address it.

5 keys to protecting OneDrive users

With the dramatic shift toward remote workforces over the last three months, many organizations are relying more heavily on cloud tools and application suites. One of the most popular is Microsoft’s OneDrive.

While OneDrive may seem like a secure cloud storage solution for companies looking to use Microsoft’s suite of business tools, many glaring security issues can expose sensitive data and personally identifiable information (PII) if proper protection protocols are ignored. Data theft, data loss, ransomware, and compliance violations are just a few things that organizations need to watch for as their employees increasingly rely on this application to save more and more documents to the cloud.

While OneDrive does provide cloud storage, it doesn’t have cloud backup functionality, a critical distinction that must be made when choosing which information to upload and share. The data is accessible, but not protected. How can businesses ensure they’re mitigating security risks, while also enabling employee access? Below we’ll discuss some of the most significant security gaps associated with OneDrive and highlight the steps organizations can take to better protect their data.

Document visibility

One area that often breeds confusion for OneDrive users is who can access company files once they’re uploaded to the cloud. For employees saving documents under personal accounts, all files created or added outside of a “Shared with Me” folder are private until the user decides otherwise. Until then, files are inaccessible to anyone but the creator and Microsoft personnel with administrative rights. For someone else to see your data, you have to share the folder or a separate file.

The same rule holds for files shared on a OneDrive for Business account, with one exception: a policy set by an administrator determines the visibility of the data you create in the “Shared” folder.

Are sensitive documents safe in OneDrive?

For purposes of this article, sensitive documents refer to materials that contain either personally identifiable information (PII), personal health information (PHI), financial information, or data covered under FISMA and GLBA compliance requirements. As we established above, these types of documents can be saved one of two ways – by an individual under a personal OneDrive account or uploaded under a Business account. Even if your business does not subscribe to a OneDrive business account, organizations should be aware that employees may be emailing themselves documents or sharing them to their personal OneDrive folders for easy access, especially over the past several months with most employees working from home.

For personal users, OneDrive has a feature called Personal Vault (PV). How secure is the OneDrive Personal Vault? It is a safe located in your Files folder explicitly designed for sensitive information.

When using PV, your files are encrypted until your identity is verified. It has several different verification methods that users can set up, whether it’s a fingerprint, a face ID, or a one-time code sent via email or SMS. The PV folder also has an idle-time screensaver that locks if you are inactive for 3 minutes on the mobile app, and 20 minutes on the web. To regain access, you need to verify yourself again.

Interestingly, the PV function isn’t available in the OneDrive for Business package. Therefore, if your organization has no other way to store sensitive data than on OneDrive, additional security measures must be taken.

OneDrive is not a backup solution

OneDrive is not a backup tool. OneDrive provides cloud storage, and there is a massive difference between cloud backup and cloud storage. They have a few things in common, like storing your files on remote hardware. But it’s not enough to make them interchangeable.

In short, cloud storage is a place in the cloud where you upload (manually or automatically) and keep all your files. Cloud storage allows you to reach files from any device at any time, making it an attractive option for workers on the go and those that work from different locations. It also allows you to manually restore files from storage in case of unwanted deletion and scale storage for your needs. While “restoring files” sounds eerily similar to backup protection, it has some fundamental faults. For example, if you accidentally delete a file in storage, or it was hit by ransomware and encrypted, you can consider the file lost. This makes OneDrive storage alone a weak solution for businesses. If disaster strikes and information is compromised, the organization will have no way to restore high volumes of data.

Cloud backup, on the other hand, is a service that uses cloud storage to save files, but its functionality doesn’t end there. Cloud backup services automatically copy your data to the storage area and restore your data relatively quickly after a disaster. You can also restore multiple versions of a backed-up file, search for specific files, and it protects data from most of the widespread threats, including accidental deletion, brute-force attacks, and ransomware.

In summary: cloud storage provides access, cloud backup provides protection.

What are the most common OneDrive risks?

All the security issues tied with using OneDrive are common for most cloud storage services. Both individual OneDrive and OneDrive for Business have multiple risks, including data theft, data loss, corrupted data, and the inadvertent sharing of critical information. Given the ease of access to documents in OneDrive, compliance violations are also a top concern for organizations that deal with sensitive data.

How can you maximize OneDrive security?

To minimize the above security issues, organizations need to follow a set of strict protocols, including:

1. Device security protocols – Several general security protocols should be implemented on devices using OneDrive. The most basic include mandatory installation of antivirus software and ensuring it is kept current on all employee devices. Other steps include using a firewall, which will block questionable inbound traffic, and activating idle-time screensaver passwords. As employees return from remote work locations and bring their devices back on-premises, it’s crucial to ensure all devices have updated security and meet the latest compliance requirements.

2. Network security protocols – In addition to using protected devices, employees should be especially cautious when connecting to any unsecured networks. Before connecting to a hotspot, instruct employees to make sure the connection is encrypted and never open OneDrive if the link is unfamiliar. Turning off the functionality that allows your computer to connect to in-range networks automatically is one easy way to add a layer of protection.

3. Protocols for secure sharing – Make sure to terminate OneDrive for Business access for any users who are no longer with the company. Having an employee offboarding process that includes this step lessens the risk of a former employee stealing documents or information. Allow access only to invited viewers on OneDrive: if you share a file or folder with “Everyone” or enable access via the link, anyone on the internet can find and open your document. It’s also helpful to have clear rules for downloading and sharing documents inside and outside the company (a hedged sketch for auditing such links follows this list).

4. Secure sensitive data – Avoid storing any payment data in any Office 365 product. For other confidential documents, individual users can use PV. Organizations should store sensitive data only in a secure on-premises system or an encrypted third-party cloud backup service that complies with the data regulations applicable to the organization.

5. Use a cloud backup solution – To best protect your company from all sides, it’s essential to use a cloud backup solution when saving valuable information to OneDrive. Make sure any backup solution you choose has cloud-to-cloud capabilities with automatic daily backup. In addition, a ransomware protection service that scans OneDrive and other Office 365 services for ransomware and automatically blocks attacks is your best defense against costly takeovers.
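
As a companion to protocol 3, here is a hedged Python sketch that flags anyone-with-the-link sharing on a OneDrive for Business item via the Microsoft Graph permissions endpoint. Token acquisition is omitted, and the endpoint path and field names should be verified against the current Graph documentation before use.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    HEADERS = {"Authorization": "Bearer <access-token>"}  # OAuth flow omitted

    def anonymous_links(item_id):
        """Return permissions on a drive item that are open to anyone."""
        url = f"{GRAPH}/me/drive/items/{item_id}/permissions"
        resp = requests.get(url, headers=HEADERS, timeout=10)
        resp.raise_for_status()
        return [p for p in resp.json().get("value", [])
                if p.get("link", {}).get("scope") == "anonymous"]

    # Review each flagged permission and tighten or revoke it:
    # for perm in anonymous_links("<item-id>"): print(perm.get("id"))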

Whether it’s preparing for upcoming mandatory regulations or dealing with the sudden management of employees working offsite, the security landscape is ever-changing. Keeping up with the latest methods to keep your company both protected and compliant is a challenge that needs constant attention. With a few critical steps and the utilization of new technology, business users can protect themselves and lessen the risk to their data.

Why traditional network perimeter security no longer protects

Greek philosopher Heraclitus said that the only constant in life is change. This philosophy holds true for securing enterprise network resources. Network security has been and is constantly evolving, often spurred by watershed events such as the 2017 NotPetya ransomware attack that crashed thousands of computers across the globe with a single piece of code. These events prompt changes in network architectures and the philosophies that underlie them.

The internet initially lacked security because there were bigger problems to solve at the time of its creation. Internet pioneer Dan Lynch remembers that time because he led the ARPANET team that made the transition from the original NCP protocols to the current TCP/IP-based protocols. “When we were first starting to test the first internet, we looked at security and thought that it would be too difficult to include at this phase because we were just trying to get it to work at all,” he said. “Once we got it working, we could add security then. Bad choice, eh? We never looked back until it was too late.”

For decades, network security philosophy focused on securing the inside from threat actors on the outside, which was the same philosophy the Romans relied on to protect their frontier. Defining perimeters made sense in the early days of network security and aligned with the basic principle of defense-in-depth — protect internal resources from external forces. It worked because employees were office-bound, and the office walls defined the perimeter that protected the resources they were trusted to access.

Step outside, and employees became intruders if they tried to access those very same resources. While traditional perimeter security was clunky, by and large it worked, despite chokepoints that became flypaper for middleware appliances, which used largely static security policies.

But security best practices and go-to devices eventually fall out of favor or become obsolete, as next-generation practices and technologies rise to replace them — until a pivotal crisis occurs. In these times, the driver for change has been a non-digital virus: COVID-19.

The new VPN workplace

The global pandemic has forced a seismic shift in how and where work gets done, and for now it’s unclear when workers will be able to return to the office. According to a recent Gartner survey of 317 CFOs and finance leaders, it won’t be anytime soon: 74 percent expect teleworking to outlive the pandemic and plan to move at least 5 percent of their previously on-site workforce to permanently remote positions after the pandemic ends.

For decades, organizations have relied on VPNs to provide employees the ability to perform their jobs securely while out of the office, but VPN budgets have generally supported about one-third of workers using VPN services at any one time.

In mid-March, VPN providers reported that traffic soared over 40 percent worldwide, peaking at 65 percent in the United States in the days before the signing of the $2 trillion stimulus package. Some enterprises conducted stress tests on their networks (i.e., bandwidth capacity, VPN stability) before allowing the majority of their employees to work from home. Others scrambled to implement VPNs or buy more licenses. In a study conducted by OpenVPN, 68 percent of employees from 300 different U.S. companies said that their company had expanded VPN usage in response to COVID-19, and 29 percent became first-time users.

While VPNs are relatively quick and less expensive to implement than a network architecture reboot, VPNs are not a panacea. The encrypted VPN communications and data tunnel still adhere to the basic premise that there is a protected perimeter a remote user needs to tunnel through to gain local access privileges to enterprise resources. VPNs also don’t prevent lateral movement or eliminate insider threats.

CISOs worry that IT personnel might cut corners when implementing VPNs, ignoring crucial security policies. They also worry about security analysts becoming fatigued by an increasing number of alerts, many of them false positives. Like the harp that woke up the sleeping giant in Jack and the Beanstalk, the sharp rise in VPN traffic has roused advanced persistent threat (APT) groups to curate new payloads and exploit existing vulnerabilities.

A UK security bulletin issued in January, for example, alerted companies to hackers exploiting a critical vulnerability in the Citrix Application Delivery Controller (ADC) and Citrix Gateway. Researchers also found a rise in scans looking for vulnerable Citrix devices. CISA issued an alert in March that encouraged enterprises to adopt a heightened awareness about VPN vulnerabilities and recommended multi-factor authentication and alerting employees of phishing scams that steal VPN credentials.

The re-emergence of zero trust

Reality demands that enterprises rethink perimeter security because employees and their laptops, smartphones and other devices are now literally all over the place, shifting the network perimeter to wherever a user is located. The network security paradigm designed to meet the dynamics of a mobile workforce is a perimeter-less network, or zero trust architecture (ZTA). At a high level, ZTA is less about network topology and physical location and more about strategy and guiding principles. The underlying philosophy is to replace the assumption of trust with the assumption of mistrust: everyone can be a threat and the network is always under attack.

To prevent or limit breaches, implied trust shrinks down to the level of data — not users, enterprise devices (assets) and infrastructure (though ZT tenets can be used to protect all enterprise assets). The “trust but verify” proverb that is synonymous with perimeter security becomes “never trust, verify and trust, then re-verify and keep re-verifying until zero trust is achieved”.

ZTA seems like a logical progression from perimeter security, just as smartphones became a logical progression of the landline. As is true with the adoption of any new technology, the story is as much about components and peripherals as it is about the psychosocial constructs behind the design principles. To psychoanalyze ZTA is to understand the root of trust. To trust is human and develops at infancy, so when humans first designed network security, it made sense that they would draw on relationships of trust to create a perimeter that created a big zone where everyone and everything were trusted and had access to each other. Beating cybercrime and working in an interconnected world, however, calls for a paradigm of mistrust. ZTA characterizes mistrust as a positive quality that makes computer sense in the global landscape of machine learning.

The notion of zero trust has undulated within the security community since the Jericho Forum published its vision on the topic in 2005. After more than 2,500 cyberattacks hit NATO in 2012, the U.S. federal government urged federal agencies to adopt the zero-trust model. In 2015, the government sounded the alarm again after the largest data breach of federal employee data.

Who listened? Enterprises seeking more flexible solutions than VPNs or more precise access and session control for on-premises and cloud applications.

Before the pandemic, interest in ZTA was piqued. It has now gained fresh momentum, especially since the technology to support it is becoming mainstream. The PulseSecure 2020 Zero Trust Progress Report found that, by the end of the year, almost 75 percent of enterprises plan to implement ZTA, but nearly half of security professionals said they lacked the expertise and confidence to implement it.

Guidance to help enterprises transition and implement ZTA is coming from the private and public sectors. Startups (i.e., Breach View, Obsidian Security, HyperCube) are capitalizing on the trend to offer zero-trust-related services. On the public front, NIST published in February the second draft of special publication 800-207, Zero Trust Architecture. The following month, the National Cybersecurity Center of Excellence, which is part of NIST, mapped ZTA to the NIST Cybersecurity Framework and offered implementation approaches. Despite the guidance, ZTA is unlikely to find full-scale adoption because the principles of perimeter security may still be relevant for some enterprises.

Figure 1. ZTA High-level Architecture. Adapted from NIST (2020). Special Publication 800-207, Zero Trust Architecture

How it works

Identity and asset management, application authentication, network segmentation and threat intelligence are the main components and capabilities ZTA relies on. Figure 1 shows the core architecture — the policy engine and policy administrator, which together form the policy decision point. The policy engine runs the security policies, which leverage behavioral analytics to make them dynamic, and the policy administrator executes the decisions made by the policy engine to grant, deny or revoke a request to access data. With ZTA, no packet is trusted without cryptographic signatures, and policy is constructed using software and user identity rather than IP addresses.

Another way to express the relationship between the policy engine and administrator is that a user communicates information (i.e., time/date, geolocation and device posture) to the policy engine, which calculates a risk score and communicates risk (i.e., the decision) to the policy administrator on how to handle the request. The decision made by the policy engine is described as information-trustworthiness.
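
The hedged sketch below shows that division of labor in miniature: a policy engine scores contextual signals and a policy administrator executes the decision. The signals, weights and thresholds are all invented; real deployments draw on much richer telemetry and behavioral analytics.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user: str
        device_patched: bool
        usual_location: bool  # geolocation matches the user's normal pattern
        off_hours: bool

    def policy_engine(req):
        """Calculate a risk score from contextual signals (weights invented)."""
        return (0.5 * (not req.device_patched)
                + 0.3 * (not req.usual_location)
                + 0.2 * req.off_hours)

    def policy_administrator(req):
        """Execute the engine's decision: grant, re-verify, or deny."""
        risk = policy_engine(req)
        if risk < 0.3:
            return "grant"
        return "re-authenticate" if risk < 0.6 else "deny"

    print(policy_administrator(AccessRequest("jdoe", True, True, False)))  # grant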

To implement ZTA, a “protect surface” is identified. The protect surface is composed of a network’s most critical and valuable data, assets, applications and services, or DAAS for short. Single-point barriers (i.e., micro-segmentation) are erected around trust zones for each piece of data. The trust zones create multiple junctions and inspection points to block unauthorized access and lateral movement. Think of the zones as airline boarding areas — only cleared passengers with a boarding pass are granted access to the desired resource (i.e., airplane). Similarly, ZT security policies authenticate and authorize users as they get closer to a requested DAAS resource.

Breaches

ZTA has its shortcomings. Although it’s designed to limit and prevent breaches, NIST says in its draft ZTA publication that it is not immune to them. Insider threats loom in ZTA as they do with perimeter security. Any enterprise administrator with configuration access to the policy engine or administrator might change the security rules. To mitigate the risk, configuration changes must be logged and subject to audit.

ZTAs are also prone to denial-of-service (DoS) attacks or route hijacks if a hacker disrupts access to the policy enforcement point (PEP). Placing the PEP in the cloud or replicating it across several locations mitigates the risk, but if a cloud provider accidentally took the PEP offline, or if botnets hit the cloud provider, the result would be the same — a disruption of service.

But the biggest threat is the one that remains a leading concern for every organization, and that is phishing scams. Verizon’s 2019 Data Breach Investigations Report showed that phishing continues to be the most popular approach for gaining access to systems (followed by stolen credentials). U.S. organizations were the No. 1 phishing target, accounting for 84 percent of total phishing volume, according to a 2019 PhishLabs report.

But the most ghastly statistic is the 667 percent spike in the number of COVID-19-related spear phishing attacks since the end of February. Despite security awareness training and compensating controls, efforts to patch the last line of defense — users — remain a challenge, and are likely to stay that way because the oldest explanation for the behavior still holds. Put simply, to err is human.

Understanding cyber threats to APIs

This is the fourth in a series of articles that introduces and explains API security threats, challenges, and solutions for participants in software development, operations, and protection.

Security issues for APIs

The many benefits that APIs bring to the software and application development communities – namely, that they are well documented, publicly available, standard, ubiquitous, efficient, and easy to use – are now being leveraged by bad actors to execute high-profile attacks against public-facing applications. For example, developers can use APIs to connect resources like web registration forms to many different backend systems. The resultant flexibility for tasks like backend updates also provides an avenue for automated attacks.

The security conundrum for APIs is that whereas most practitioners would recommend design decisions that make resources more hidden and less available, successful deployment of APIs demands willingness to focus on making resources open and available. This helps explain the attention on this aspect of modern computing, and why it is so important for security teams to identify good risk mitigation strategies for API usage.

OWASP risks to APIs

In addition to its focus on risks to general software applications, OWASP has also provided useful guidance for API developers to reduce security risk in their implementations. Given the prominence of the OWASP organization in the software community, it is worth reviewing the 2019 Top 10 API Security Risks (with wording taken from the OWASP website; a hedged code sketch of the first risk follows the list):

1. Broken Object Level Authorization. APIs tend to expose endpoints that handle object identifiers, creating a wide attack surface level access control issue. Object level authorization checks should be considered in every function that accesses a data source using an input from the user.

2. Broken User Authentication. Authentication mechanisms are often implemented incorrectly, allowing attackers to compromise authentication tokens or to exploit implementation flaws to assume other user’s identities temporarily or permanently. Compromising a system’s ability to identify the client/user compromises API security overall.

3. Excessive Data Exposure. Looking forward to generic implementations, developers tend to expose all object properties without considering their individual sensitivity, relying on clients to perform the data filtering before displaying it to the user.

4. Lack of Resources & Rate Limiting. Quite often, APIs do not impose any restrictions on the size or number of resources that can be requested by the client/user. Not only can this impact the API server performance, leading to Denial of Service (DoS), but also leaves the door open to authentication flaws such as brute force.

5. Broken Function Level Authorization. Complex access control policies with different hierarchies, groups, and roles, and an unclear separation between administrative and regular functions, tend to lead to authorization flaws. By exploiting these issues, attackers gain access to other users’ resources and/or administrative functions.

6. Mass Assignment. Binding client provided data (e.g., JSON) to data models, without proper properties filtering based on a whitelist, usually lead to mass assignment. Either guessing objects properties, exploring other API endpoints, reading the documentation, or providing additional object properties in request payloads, allows attackers to modify object properties they are not supposed to.

7. Security Misconfiguration. Security misconfiguration is commonly a result of unsecure default configurations, incomplete or ad-hoc configurations, open cloud storage, misconfigured HTTP headers, unnecessary HTTP methods, permissive Cross-Origin resource sharing (CORS), and verbose error messages containing sensitive information.

8. Injection. Injection flaws, such as SQL, NoSQL, command injection, etc., occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s malicious data can trick the interpreter into executing unintended commands or accessing data without proper authorization.

9. Improper Assets Management. APIs tend to expose more endpoints than traditional web applications, making proper and updated documentation highly important. Proper hosts and deployed API versions inventory also play an important role to mitigate issues such as deprecated API versions and exposed debug endpoints.

10. Insufficient Logging & Monitoring. Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more systems to tamper with, extract, or destroy data. Most breach studies demonstrate the time to detect a breach is over 200 days, typically detected by external parties rather than internal processes or monitoring.
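
To ground the first risk, here is a hedged Flask sketch of the object-level authorization check OWASP recommends. The route, the in-memory data store and the way the authenticated caller is resolved are all invented for illustration.

    from flask import Flask, abort

    app = Flask(__name__)
    ORDERS = {101: {"owner": "alice", "total": 42.0}}  # stand-in data store

    def current_user():
        return "alice"  # in real life, resolved by authentication middleware

    @app.route("/api/orders/<int:order_id>")
    def get_order(order_id):
        order = ORDERS.get(order_id)
        if order is None:
            abort(404)
        # The identifier in the URL is attacker-controlled, so verify
        # ownership on every lookup -- this is the missing BOLA control.
        if order["owner"] != current_user():
            abort(403)
        return order  # Flask 1.1+ serializes dicts to JSON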

API security requirements

As exemplified by the OWASP list, the cyber security community is beginning to identify many familiar, canonical issues that emerge in the use of APIs for public-facing applications. Below are five generalized cyber security requirements for APIs that come up frequently in design and development contexts, for both legacy and new Internet applications:

Visibility

The adage that knowledge is power seems appropriate when it comes to API visibility. Application developers and users need to know which APIs are being published, how and when they are updated, who is accessing them, and how they are being accessed. Understanding the scope of one’s API usage is the first step toward securing it.

Access control

API access is often loosely-controlled, which can lead to undesired exposure. Ensuring that the correct set of users has appropriate access permissions for each API is a critical security requirement that must be coordinated with enterprise identity and access management (IAM) systems.

Bot mitigation

In some environments, as much as 90% of the respective application traffic (e.g., account login or registration, shopping cart checkout) is generated by automated bots. Understanding and managing traffic profiles, including differentiating good bots from bad ones, is necessary to prevent automated attacks without blocking legitimate traffic. Effective complementary measures include implementing whitelist, blacklist, and rate-limiting policies, as well as geo-fencing specific to use-cases and corresponding API endpoints.
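
One common way to realize the rate-limiting piece is a per-client token bucket, sketched below; the capacity and refill values are illustrative and would be tuned per endpoint.

    import time

    class TokenBucket:
        def __init__(self, capacity=10, refill_per_sec=1.0):
            self.capacity, self.refill = capacity, refill_per_sec
            self.tokens, self.last = float(capacity), time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.refill)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False  # throttle, challenge, or drop the request

    buckets = {}

    def allow_request(client_key):
        """One bucket per API key or source IP."""
        return buckets.setdefault(client_key, TokenBucket()).allow()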

Vulnerability exploit prevention

APIs simplify attack processes by eliminating the web form or the mobile app, thus allowing a bad actor to more easily exploit a targeted vulnerability. Protecting API endpoints from business logic abuse and other vulnerability exploits is thus a key API security mitigation requirement.

Data loss prevention

Preventing data loss over exposed APIs – whether by appropriately privileged users or otherwise, and whether due to programming errors or security control gaps – is also a critical security requirement. Many API attacks are designed specifically to gain access to critical data made available from back-end servers and systems.

The API community continues to drive toward more standardized agreement on the optimal approach to security. To this end, industry groups such as the OAuth community have proposed criteria for API security that are quite useful. The most likely progression is that the software security community will continue to refine its understanding of and insight into the full range of API security requirements in the coming years. Observers should thus expect to see continued evolution in this area.

API abuse in action

By design, APIs are stateless: the initial request and response are self-contained, holding all the information needed to complete the transaction. Making program calls to an API directly, or as part of a mobile or web application, improves user experience and overall performance. It also makes it very easy for a bad actor to script and automate an attack, as highlighted in the two examples below.

Account takeover and romance fraud: Zoosk is a well-known dating application. Bad actors decompiled the Zoosk app to uncover account login APIs. Using automation and attack toolkits, they then executed account takeover attacks. In some cases, compromised accounts were used to establish a personal relationship with another Zoosk user and, as the relationship blossomed, the bad actor requested money due to a sudden death or illness in the family. The unsuspecting user gave the money to the bad actor, who was never to be heard from again. Prior to implementing Cequence, romance scams at Zoosk averaged $12,000 with each occurrence. Now, they are virtually eliminated, resulting in increased user confidence and strengthened brand awareness.

Account takeover and financial fraud: Another example of APIs being targeted with an automated attack involves a large financial services customer finding that attackers had targeted its mobile application login API to execute account takeovers. If successful, the bad actors could attempt to commit financial fraud by transferring funds across the Open Funds Transfer (OFX) API. OFX, of course, is the industry standard API for funds transfer within the financial services community, and as such the APIs are publicly-available and well-documented to facilitate use.

The ubiquity and stateless nature of APIs are beneficial in many ways, but they also introduce numerous challenges that traditional security technologies cannot address. By design, APIs do not have a client-side component, so traditional defense techniques like Captchas or JavaScript and mobile SDK instrumentation cannot be used elegantly to prevent an automated attack. Often, there is no corresponding browser or mobile application for redirection and cookie assignment for instrumentation. The result is that the API and associated application are left unprotected, or are protected only partially.

Contributing author: Matthew Keil, Director of Product Marketing, Cequence.

How to protect your business from COVID-19-themed vishing attacks

Cybercriminals have been using the COVID-19 pandemic as a central theme in all kinds of crisis-related email phishing campaigns. But because of the dramatic rise in the number of at-home workers, one method that has become increasingly common over the past few months is vishing, i.e., phishing campaigns executed via phone calls.

Rising success rates are the reason why vishing has become more common, and there are several factors driving this trend:

  • People are actually at home to receive calls, giving threat actors more hours to connect with live targets
  • Everyone is on high alert for information about the pandemic, stimulus checks, unemployment compensation, ways to donate to charitable organizations, and other COVID-related topics, providing attackers with an endless supply of vishing social engineering options
  • Cybercriminals conduct research and use personal information – the last four digits of a social security number, for example – to build credibility and fool their victims into thinking they are speaking with legitimate sources.

Let me expand on this last point. Modern vishing attacks use research-based social engineering to attack targets with convincing scams. How do these attackers know so much about their targets? Typically, cybercriminals obtain personally identifiable information in one of three ways:

1. Social media

Many social media profiles are not protected from public view and they serve as a treasure trove of personal information that can be used for building attacks. For example, listing your place of employment with an employee badge not only lets an attacker know where you work, but what the company badge looks like for replication purposes.

“About You” sections of social media accounts often reveal personal information that can be used for password reset fields – your favorite color, your dog’s name, or the city you were born in. And detailed posts outlining work projects, professional affiliations and technologies you’re using all help build a valid pretext scenario.

2. Password dumps

There has been no shortage of public data breaches that have resulted in extensive password dumps containing usernames, email addresses and passwords of compromised accounts. Individuals often reuse passwords across different accounts, which makes it easy for attackers to hack their way in through “credential stuffing.” For example, a LinkedIn password and user email address exposed in a breach could be used to access bank or e-commerce accounts.
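
On the defensive side, passwords can be screened against known dumps with the Pwned Passwords k-anonymity API, as in the sketch below; only the first five characters of the SHA-1 hash ever leave the machine. The example password is, of course, illustrative.

    import hashlib
    import requests

    def times_pwned(password):
        """How often a password appears in public breach corpora."""
        digest = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                            timeout=10)
        resp.raise_for_status()
        for line in resp.text.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    if times_pwned("Summer2020!"):
        print("This password is in public dumps; assume it will be stuffed.")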

3. Search engines

An individual’s name, address and even an image of their signature can often be found online via local government public records sites. In addition, paid services exist for individuals who want to obtain additional information, such as a target’s date of birth or marital status.

Many people don’t realize how much personal information can be found via a simple online search. As a result, when an attacker uses things like the last four digits of a victim’s social security number, the town in which they live, or the names of their children, victims assume the person they are speaking to is a credible source, and they don’t think twice about divulging information they would otherwise keep private.

Vishing is a business problem, too

On the surface, it might seem like vishing attacks are a consumer problem only. But, in reality, businesses can be impacted too – especially now, as a significant portion of employees across the country are working from home.

These employees not only have corporate information stored on their personal devices, but they also generally have remote access to internal corporate resources. Vishing attacks are designed to build relationships with employees, eventually convincing them to give away confidential information, or to click on malicious links that are sent to them by the visher, who has earned confidence as a “trusted source.” As with other social engineering attacks, the ultimate goal is to gain access to corporate networks and data, or to get other information that can be used to commit fraud.

Tips for mitigating COVID-19 vishing attacks

Mitigating the risk of vishing attacks requires a multi-faceted approach, but it should start with end user awareness and education.

As soon as possible, businesses should roll out employee training sessions (even if they’re virtual) that explain what vishing is, how cybercriminals obtain personal information, and how they’re exploiting the COVID-19 pandemic to trick victims.

They should provide basic security tips, such as keeping social media accounts private and using different passwords for different accounts, as well as best practices for responding to a real-world attack. Incorporating attack simulations into training programs can also be a great way to teach employees how to respond to a vishing campaign using defined internal processes.

Technical controls are another key component of a layered security strategy to protect employees and your business from vishing threats. Web filters, antivirus software, and endpoint detection and response solutions are examples of the types of standard security controls that should be implemented. In addition, password policies must be defined and communicated to employees. And, last but not least, multi-factor authentication can be effective in thwarting attacks, as it forces cybercriminals to crack more than one user credential to gain access to corporate systems.

Defending against vishing during the pandemic and beyond

Even though COVID-19-prompted shelter-in-place orders are lifting across the country, many organizations are maintaining work-at-home policies for the safety of their employees and because they realize the operational and financial benefits that come along with telecommuting programs. This means that protecting the remote workforce should continue to be a top priority for businesses of all sizes and defending against vishing attacks should be a core component of security strategy.

Vishers will continue to come calling long after the COVID-19 pandemic comes to an end, so it’s important to make sure remote workers – and all employees – know how to identify suspicious callers, just like they should know how to identify suspicious emails. Supplementing employee education with the proper security controls is a good starting point to keep your staff and your business safe regardless of who’s on the other end of the line.

Maintaining the SOC in the age of limited resources

With COVID-19, a variety of new cyber risks have made their way into organizations as a result of remote working and increasingly sophisticated, opportunistic threats. As such, efficiency in the security operations center (SOC) is more critical than ever, as organizations have to deal with limited SOC resources.

Limited SOC resources

The SOC is a centralized team of analysts, engineers, and incident managers who are responsible for detecting, analyzing, and responding to incidents and keeping security operations tight and resilient – even when security strategy fails. During the first 100 days of COVID-19, there was a 33.5 percent rise in malicious activity, putting increased pressure on these teams. Rapidly changing attack methods make keeping up an immense challenge.

With all of this in mind, it’s easy for the SOC to become overwhelmed and overworked. To avoid this and protect the business, it’s important to keep morale high, production efficient and automation reliance balanced on need. Read on to explore the do’s and don’ts of maintaining SOC operations throughout the pandemic.

Do: Prevent burnout before it’s too late

The SOC requires a high level of technical expertise and, because of that, suitable and competent analysts to fill positions in the field are scarce.

Beyond the skills shortage, the job of a SOC is made even more difficult and overwhelming by the lack of employee awareness and cybersecurity training. Untrained employees – those who don’t know how to appropriately identify a live threat – can create a high noise-to-signal ratio by reporting things that may not be malicious, or can have high click-through rates. This means organizations are not putting enough emphasis on building what could be the strongest defense for their business – the human firewall. Ninety-five percent of cyberattacks begin with human error, causing more issues than the SOC can handle.

For those that are implementing training, it’s likely they’re not seeing their desired results, which means an uptick in employee mistakes. Cyber hygiene across organizations deteriorated sharply by late March, with blocked URL clicks increasing by almost 56 percent. Organizations experiencing this downgrade in employee cyber resiliency should take the time to rethink their methods and find alternatives that keep staff engaged, rather than implementing irregular, intensive training with boring content just to check a box.

Coupling this with rapidly changing threat activity, the SOC is under immense pressure, which could lead to a vicious cycle where analysts leave their roles, creating open vacancies that are difficult to fill.

Don’t: Jump headfirst into automation

With limited SOC resources, one may think automated alerts and post-breach threat intelligence are the answer to ensuring proper attention is kept on an enterprise’s security.

On one hand, automation can help alleviate time spent on administrative action. For example, it can help detect threats more quickly, giving teams more time to focus on threat analysis.

However, post-breach threat intelligence and automated alerts can also lead to fatigue and a lot of time spent investigating, which could cost more than the administrative burden they remove. Not to mention, machine learning can also learn bad behaviors and, in itself, become a vulnerability – threat actors can learn machine patterns to target systems at just the right time.

The SOC should therefore adopt automation and intelligence only where it makes the most sense, layering in preventive measures to reduce that fatigue. Organizations should be critical of the technologies they take on, because ultimately, a quick response can create an added burden. Instead, they should focus on improving the metrics that have a positive impact on the SOC and employees, such as a reduction in reported cases and dwell time, as well as the ratio of good-to-bad things reported. With the right training, technology, and policies, the SOC – and the business – can get the most out of its investment.

Do: Improve virtual collaboration practices

A recent (ISC)2 survey found that 90 percent of cybersecurity executives are working remotely. Like every other employee in a digitally-connected company, an organization’s SOC is also likely not in the office right now. This is a challenge, as some have become accustomed to putting their SOC, other IT teams, and the technology that they use in close proximity to one another to create a stronger, more resilient approach. This extends the SOC’s operational knowledge and creates a faster response in time of crisis.

Given the current pandemic, most teams are unable to have this physical proximity, stretching the bounds of how they operate, which could put a strain on larger business operations. This can inhibit communication and ticketing, which is seamless when seated together. For instance, folks may be working on different schedules while remote, making it hard to communicate in real-time. Remote scenarios can also deepen data silos amongst teams who aren’t in communication. These challenges increase the amount of time it takes the SOC to find and address a potential threat, widening the attack surface.

As such, organizations should be mindful and strategic about their new cross-functional operation and create new ways for teams to collaborate in this new virtual frontier. For instance, businesses should:

  • Ensure access to their enterprise: Start thinking about disaster recovery and business continuity as the tools needed to ensure the security of – and even access to – the “castle” that was once considered the enterprise.
  • Consider their tools: Adjust communication styles and interactions by adopting tools, like Microsoft Teams, Slack, or Skype, to help everyone stay in constant communication or keep the channel open during traditional working hours.
  • Focus on training: Develop training and documentation that can be used by operations teams in a consistent fashion. This could include a wiki and other tools that help with consistent analysis and response.
  • Keep operations running globally: Establish formal standups and handovers for global teams.
  • Maintain visibility through technology: Adopt SaaS technologies that enable the workforce and offer the visibility they need to do their jobs.
  • Change the hiring approach: When hiring, realize that this is a “new” world where proximity is no longer a challenge. With the right tools and processes, businesses can take the chains off when hiring smart people.
  • Recognize and reward success: Morale is the most important thing when it comes to SOC success. Take breaks where needed, reward those that are helping the business succeed and drive success based on goals and metrics.

The cyber threats posed by COVID-19 and impacting the SOC are rapidly evolving. Despite current circumstances, malicious actors are not letting up and organizations continue to be challenged. Due to the limited number of SOC analysts equipped with the skills to keep organizations protected, the risk of burnout is high and the industry does not have the staff to fill vacant roles. With all of this in mind, SOC analysts must be supported in their roles as they work to keep businesses safe, by adopting the right technologies, processes and collaboration techniques.

How do I select a backup solution for my business?

42% of companies experienced a data loss event that resulted in downtime last year. That high number is likely caused by the fact that while nearly 90% are backing up the IT components they’re responsible for protecting, only 41% back up daily – leaving many businesses with gaps in the valuable data available for recovery.

In order to select an appropriate backup solution for your business, you need to think about a variety of factors. We’ve talked to several industry professionals to get their insight on the topic.

Oussama El-Hilali, CTO, Arcserve

Before selecting a backup solution, IT leaders must ask themselves where the majority of data generated by their organization resides. As SaaS-based collaboration and storage systems grow in popularity, it’s essential to choose a backup solution that can protect their IT environment.

Many people assume cloud platforms automatically back up their data, but this largely isn’t the case. They’ll need a solution with SaaS backup capabilities in place to safeguard against cyberattacks and IT outages.

To further prevent downtime, organizations should also consider backup solutions that offer continuous replication of data. That way, in case of unplanned outages, they can seamlessly fail over to a replica of their systems, applications and data to keep the organization up and running. This is also helpful in case of a ransomware attack or other data corruption – organizations can revert to a “known good” state of their data and pick up where they left off before the incident. Generally, all backup tools should provide redundancy by using the rule of three – have at least three copies of your data, store the copies on at least two different media types, and keep at least one of those copies offsite.
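
To make the rule of three concrete, here is a minimal sketch that checks a backup inventory against it; the inventory format is an illustrative assumption rather than any product’s schema.

```python
# Minimal 3-2-1 check: at least 3 copies, 2 media types, 1 offsite copy.
# The inventory format is illustrative, not tied to any particular product.
copies = [
    {"location": "onsite",  "media": "disk"},
    {"location": "onsite",  "media": "tape"},
    {"location": "offsite", "media": "cloud"},
]

def satisfies_3_2_1(copies):
    media_types = {c["media"] for c in copies}
    offsite = [c for c in copies if c["location"] == "offsite"]
    return len(copies) >= 3 and len(media_types) >= 2 and len(offsite) >= 1

print("3-2-1 compliant:", satisfies_3_2_1(copies))  # True
```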

Finally, it’s important to weigh the pros and cons of on-prem versus cloud-based backups. Users should keep in mind that, in general, on-prem hardware is more susceptible to data loss in the event of a natural disaster. There’s no “one size fits all” solution for every organization, so it’s best to take a holistic look at your specific needs before you start looking for a solution – and continue to revisit and update the plan as your organization evolves.

Nathan Fouarge, VP Of Strategic Solutions, NovaStor

When looking for a backup solution for your business, there are a number of questions to ask to narrow down the solutions you want to look at.

Here’s what you should be prepared to answer in order to select a backup solution for your business:

  • How much downtime can you afford, or how fast do you need to be back up and running? In other words, what is your recovery time objective (RTO)?
  • How much data are you willing to lose? In other words, what is your recovery point objective (RPO)? Are you willing to take only daily backups, accepting that you could lose an entire day’s worth of work, or do you need a solution that can do hourly or continuous backup? (See the sketch after this list.)
  • How long do you need to keep historical data? Do you have compliance requirements that oblige you to retain data for a long time?
  • How much data do you have to back up, and what types of data and applications need to be backed up?
  • How many copies of the data do you need, and where do you want to store them? Do you want to follow the recommended 3-2-1 approach, with three copies of the data? Do you want to keep backups locally, offsite (USB drive or replicated NAS), or in the cloud?
  • And the ultimate question: how much are you willing to spend on a backup solution?
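
As a rough illustration of how backup frequency maps to an RPO, the sketch below computes the worst-case data loss for a few example schedules (the schedules are illustrative, not recommendations).

```python
# Worst case, work saved right after a backup completes is lost until the next
# backup runs, so the achievable RPO roughly equals the backup interval.
schedules = {"daily": 24.0, "hourly": 1.0, "15-minute replication": 0.25}

for name, interval_hours in schedules.items():
    print(f"{name}: up to {interval_hours:g} hour(s) of work at risk")
```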

Once you have answered all of those questions, you can look into which solutions fit your criteria. More than likely, as you start evaluating candidates, you will have to revisit some of your answers to the questions above.

Konstantin Komarov, CEO, Paragon Software

The most important part is how you back up your data, not how you organize it. The key aim is to keep data safe, regardless of whether you back up a single database or clone the entire system. The best practice, and the most cost-effective approach, is to implement incremental backups and replicate the data both to local storage and to the cloud.

An incremental backup replicates only the parts of the system or database that have changed since the previous backup, not the entire thing. This shortens the backup process and reduces the amount of storage space used. Replicating to both local storage and the cloud offers the best protection for your data in case the physical disk you are backing the data up to is damaged or lost.
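
As a minimal sketch of the incremental idea – real products track changed blocks and maintain catalogs, so treat this purely as an illustration – a file-level version might compare modification times against the last backup run:

```python
import shutil
from pathlib import Path

def incremental_backup(source: Path, dest: Path, last_backup_time: float) -> int:
    """Copy only the files modified since the previous backup run."""
    copied = 0
    for path in source.rglob("*"):
        if path.is_file() and path.stat().st_mtime > last_backup_time:
            target = dest / path.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)  # copy2 preserves file timestamps
            copied += 1
    return copied

# Run the same pass against both a local disk and a cloud-synced folder to get
# the local-plus-cloud replication described above.
```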

However, to make backups effective and continuous, they need to be scheduled and managed by an application deployed on a dedicated endpoint, one that works side by side with your IT infrastructure without slowing down or disrupting the wider system. A sound approach is to build your own backup on an open cloud backup platform, which provides ready-to-go algorithms and tools for creating a solution fully adjusted to the needs of a particular business.

Ahin Thomas, VP, Backblaze

When choosing a backup solution for your business, consider three factors: optimize for remote first, sync vs. backup, and recovery.

As businesses grow, implementing a strong backup strategy is challenging, especially when access to employees can change at a moment’s notice. That’s why it’s important to have a backup solution that is easy to deploy and requires little to no interfacing with employees—your COVID-stressed IT team will thank you.

Secondly, Dropbox and Google Drive folders are not backup solutions. They require users to drop files in designated folders, and any changes made to a file are synced across every device. A good backup solution will ensure all data is backed up to the cloud, and will work automatically in the background, backing up all new or changed data.

Data recovery is the final piece of the puzzle, and most often overlooked. Data loss emergencies are stressful, so it is vitally important to understand how recovery works before you choose a solution. Make sure it’s fast, easy, and works whether you’re on or off site. And test it regularly! You never know when your coworker (aka kid) will spill a sippy cup all over your laptop.

Nigel Tozer, Solutions Director EMEA, Commvault

For many organizations, the realization that their backup products are no longer fit for purpose comes as a very unwelcome discovery. Anyone arriving at this kind of crossroads faces some big decisions: one of the most frequently occurring is whether to add to what you have, or go for something new.

For anyone in that position, there are four simple considerations that can help inform decisions about backup strategy:

  • Flexibility – Make sure your backup solution supports a wider ecosystem than just what you’re using today. You don’t want it to hinder your agility or cloud adoption down the line.
  • Automation – Look for solutions where intelligent automation, even AI, can help dispense with the specialist or mundane elements of backup processes and free up busy IT teams’ time.
  • Budget – Low-cost software that needs a dedupe appliance as you grow, or an appliance with a rigid upgrade path, can turn out to be more costly long term – so do your research.
  • Consolidation – Many products typically means silos, wasted space and more complexity. Consolidating to a backup platform instead of multiple products can make a real difference in infrastructure savings, and reduced complexity.

Integrating a SIEM solution in a large enterprise with disparate global centers

Security Information and Event Management (SIEM) systems combine two critical infosec abilities – information management and event management – to identify outliers and respond with appropriate measures. While information management deals with the collection of security data from across silos in the enterprise (firewalls, antivirus tools, intrusion detection, etc.), event management focuses on incidents that can pose a threat to the system – from benign human errors to malicious code trying to break in.

Having been in existence for over a decade now, SIEM systems have come a long way: from mere log management to integrating machine learning and analytics for end-to-end threat monitoring, event correlation, and incident response. The modern SIEM system goes way beyond collating data and incidents for security supervisors to monitor – it analyzes and responds to threats in real time, reducing human intervention while enabling a more holistic approach to information security.

But given the magnitude and complexity of the tasks performed by a SIEM solution, integrating it into the existing information security architecture of an enterprise can be daunting, especially for a large enterprise with multiple, disparate centers spread across the globe.

Common SIEM integration mistakes

Cybersecurity is a highly dynamic space, and a solution that is effective today may no longer be viable tomorrow. This is exactly where SIEM integration pitfalls stem from: deployments that fail, or solutions that stop meeting goals in the long run, are a commonly observed problem. And when it comes to a large enterprise with a global presence, the complexity only compounds further. Here’s a look at some common mistakes organizations make while implementing a SIEM solution, which can later snowball into major threats.

1. Under-planned implementation

Despite widespread awareness that SIEM solutions can be complex in nature, many organizations go about integrating one without first defining their goals and requirements. The chances of successfully implementing a SIEM solution without proper planning are slim, and evaluating the solution at a later stage or on an ad-hoc basis only piles up expenses that could easily have been avoided.

Moreover, out-of-the-box SIEM solutions are generic in nature and cannot cater to every organization’s specific cybersecurity challenges. This is another reason prior planning comes in handy: it leaves enough scope for customizations and third-party integrations before implementation.

2. Implementing without a predefined scope

Implementing a SIEM solution without defining the scope is akin to building a house without a foundation, and in the case of a large multinational enterprise, skipping proper scoping is courting disaster. The scope provides the basis for everything that follows – planning, deployment, implementation, and maturing the SIEM solution with related capabilities. It will determine the choice of solution, the architectural requirements, the necessary staffing, and the processes and procedures.

3. Rooting for the one-solution-fits-all approach

Given the large, almost comprehensive nature of a SIEM tool, it may seem tempting to try and do everything with it at once. While SIEM solutions are capable of collecting, processing and managing large amounts of data, that doesn’t mean it’s a good practice to over-stuff the solution with too many capabilities at once.

Organizations with a global presence are bound to deal with myriad and diverse use cases, each distinct and requiring a different approach. Hence, SIEM use cases should be tackled in staged cycles that make way for continual improvement, rather than with a one-solution-fits-all approach.

4. Monitoring noise

Another common mistake is treating the SIEM solution as a log management tool: setting it to capture and store all logs from all devices and applications indiscriminately, under the impression that this will give a more comprehensive and clearer view. In practice, such an exercise amplifies the noise rather than reducing it.

What’s more, one can only imagine the chaos it will cause in the case of a large enterprise with a global presence. Pouring in more hay is pointless when your purpose is to find a needle in the haystack.

SIEM implementation best practices

These mistakes can easily be avoided by following a set of implementation best practices. Every organization’s implementation will be different, but here are some steps a CISO can consider that are crucial to the effective performance of a SIEM solution post-deployment.

1. Define the project and scope

The first step in a SIEM implementation is to plan the project’s scope and timeline, including the necessary informational, budgetary, and physical resources. Companies must also define their goals and identify all required resources at this stage. As a starting point, the CISO should consider setting up basic rules, identifying necessary compliance and policy requirements, and structuring post-implementation SIEM management.

Note that SIEM solutions need to be connected to almost everything across the network infrastructure to achieve optimal performance, so defining log sources up front is recommended. Here are some basic components that can be included while scoping (one way to record the resulting inventory is sketched after these lists):

Security control logs:

  • Intrusion detection and prevention systems (IDPS)
  • Endpoint protection software
  • Data loss prevention (DLP) software
  • Threat intelligence software
  • Firewalls
  • Honeypots
  • Web filters

Network infrastructure logs:

  • Routers
  • Switches
  • Controllers
  • Servers
  • Databases
  • Internal applications

Other data points:

  • Network architecture
  • Network policy configurations
  • IT assets
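
One lightweight way to record such an inventory is as structured data the implementation team can review and version-control; the sketch below is an illustrative skeleton, not a required format.

```python
# Illustrative scoping inventory; extend per environment and keep under version control.
log_sources = {
    "security_controls": [
        "IDPS", "endpoint protection", "DLP", "threat intelligence",
        "firewalls", "honeypots", "web filters",
    ],
    "network_infrastructure": [
        "routers", "switches", "controllers", "servers",
        "databases", "internal applications",
    ],
    "other_data_points": [
        "network architecture", "network policy configurations", "IT assets",
    ],
}

total = sum(len(items) for items in log_sources.values())
print(f"{total} log sources defined across {len(log_sources)} groups")
```
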
2. Research products

Product research is something that will be unique to each business. However, on a broad level, there are three main informational resources that the CISO can consider before zeroing in on a SIEM.

Vendor analysis: A number of online resources and search engines can help identify the major SIEM vendors. CISOs can then contact the vendors for information relating to their specific situation. In addition, CISOs can consult software analyst firms or commission empirical testing for vendor analysis. There are many research and testing providers who can generate valuable insights on markets and tools.

Product reviews: How product reviews help a CISO decide on a SIEM solution is self-explanatory. Review websites can come in handy for CISOs to compare and analyze some of the best SIEM tools out there.

Use case assessment: Assessing use cases that will pertain to the business – not just in the immediate future, but in the long run – is essential to ensuring a smooth SIEM integration. This step requires CISOs to communicate with the shortlisted vendors and understand industry-specific scenarios, case studies, and product demos.

3. Implementation planning

The next step is to outline a number of implementation procedures to ensure a smooth and effective transition. Here are a few components that CISOs should include in their plan:

Design architecture: A detailed design architecture gives a clearer view of the entire implementation. A good starting point is outlining all log sources and data inputs, then deploying information collectors to ensure every log source is connected.

Create rules: It is critical to ensure that correlation engines are functioning with basic policies, and this is also the stage at which to determine the more customized rules to be implemented over the long term. These rules help optimize documentation and alerting without damaging network performance, and they should be customized to meet any necessary compliance requirements.
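
To illustrate what a basic correlation rule does, here is a minimal sketch that flags repeated failed logins from a single source within a time window; the threshold, window, and event fields are assumptions that would be tuned per environment.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative events; the field names assume a normalized log schema.
events = [
    {"time": datetime(2020, 6, 1, 10, 0, s), "type": "failed_login", "src": "10.0.0.7"}
    for s in range(0, 50, 10)
]

THRESHOLD, WINDOW = 5, timedelta(minutes=5)

buckets = defaultdict(list)
for e in sorted(events, key=lambda e: e["time"]):
    buckets[e["src"]].append(e["time"])
    # Keep only the events still inside the sliding window.
    buckets[e["src"]] = [t for t in buckets[e["src"]] if e["time"] - t <= WINDOW]
    if len(buckets[e["src"]]) >= THRESHOLD:
        print(f"ALERT: {THRESHOLD} failed logins from {e['src']} within {WINDOW}")
```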

Define process: It is advisable to put a handoff plan in place before deployment, to transfer control from the implementation team to the security operations or IT management team. Considering the company’s staffing capabilities is also crucial to ensuring that teams can manage the SIEM seamlessly; otherwise, the investment will be rendered pointless.

In addition to the aforementioned steps, it is a good idea to outline any other long-term management processes specific to the organization, such as training staff to manage and monitor a SIEM system.

4. Deployment and review

As soon as the solution is deployed, it is necessary to take a few immediate actions to ensure smooth functioning going forward:

  • Ensure data is being collected and encrypted properly
  • Ensure all activities, logs and events are stored correctly
  • Test the system to confirm that connected devices are visible and match those planned

Ensuring seamless functioning of the SIEM solution

Successfully implementing a SIEM solution is just the beginning. Teams should continue testing and updating the solution against the latest attack techniques. Timely upgrades and customizations are inevitable as the threat landscape and policies keep evolving – they are the only way to keep the number of false positives in check while ensuring end-to-end information security to the maximum extent possible.

CISOs are critical to thriving companies: Here’s how to support their efforts

Even before COVID-19 initiated an onslaught of additional cybersecurity risks, many chief information security officers (CISOs) were struggling.

According to a 2019 survey of cybersecurity professionals, these critical data defenders were burned out. At the time, 64% were considering quitting their jobs, and nearly as many, 63%, were looking to leave the industry altogether.

Of course, COVID-19 and the ensuing remote work requirements have made the problem worse. It’s clear that companies could be facing an existential threat to their data security, and that their best defenders are struggling to stay in the fight.

The current state of CISOs

Even as they deal with an ever-expanding threat landscape, CISOs are managing a mounting plate of responsibilities. As companies hurtle toward digital transformation, automation, cloud computing, brand reputation, and strategic investments are all landing on CISOs’ plates.

It’s easy to see why CISOs feel overwhelmed, overworked, unprepared, and underequipped. Cisco’s recent CISO survey, which combines insights from panel discussions and more than 2,800 responses from IT decision-makers, puts both quantitative and qualitative metrics to these problems.

Notably, leaders identified a workforce that is rapidly becoming remote as a top cause of stress and anxiety. Specifically, Cisco reports that “More than half (52%) told us that mobile devices are now very or extremely challenging to defend.”

By now, the cybersecurity vulnerabilities associated with remote work are well-documented, but the COVID-19 pandemic makes it clear that remote work is going to become both more prominent and more problematic in the weeks, months, and years ahead.

In the meantime, a deluge of alerts and threat notifications is causing cybersecurity fatigue, meaning leaders are “virtually giving up on proactively defending against malicious actors.” Collectively, 42% of survey respondents indicated that they were experiencing cybersecurity fatigue.

This challenge is amplified when leaders are managing multiple vendors, as “complexity appears to be one of the main causes of burnout.”

Finally, CISOs are being asked to navigate an increasingly complex threat landscape while accounting for expanding government oversight in the form of data privacy laws, which are becoming ever more prevalent now that the pendulum has swung almost entirely toward digital discretion.

To be sure, that’s not to say that CISOs aren’t excelling – the industry is full of hard-working, talented, and ambitious people. But everyone, from MSPs to CEOs, needs to do a better job of supporting them.

Supporting struggling CISOs and protecting data

1. Prioritize singularity

CISOs are struggling to manage a multi-vendor environment with disassociated solutions coming from many places. Instead, provide comprehensive endpoint data loss prevention software that accounts for a wide range of threats and offers extensive insights into a company’s data landscape.

2. Rely on automation

The vast majority of cybersecurity personnel who reported cyber fatigue experienced more than 5,000 alerts every day. The rapidly expanding capabilities of AI and machine learning have to be harnessed to reduce this onslaught of information. Many threats can be addressed with software, reducing the number of alerts that actually make it to IT personnel and allowing them to focus on the most pressing threats. It’s both a better use of their time and a more sustainable way to work.
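
As a minimal sketch of the kind of pre-filtering such automation performs, the snippet below collapses duplicate alerts before they reach an analyst; the alert format is an illustrative assumption.

```python
from collections import Counter

# Illustrative raw alert stream; in practice this would come from security tooling.
raw_alerts = [
    ("malware_signature_x", "host-12"),
    ("malware_signature_x", "host-12"),  # duplicate
    ("port_scan", "10.0.0.9"),
    ("malware_signature_x", "host-12"),  # duplicate
]

# Collapse duplicates into one alert with a count, so analysts see 2 items, not 4.
deduped = Counter(raw_alerts)
for (rule, source), count in deduped.items():
    print(f"{rule} on {source}: seen {count}x")
```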

What’s more, relying on automation can help IT leaders account for a growing and apparent skills gap that leaves many departments understaffed.

3. Account for known risks

Today’s threat landscape is expansive, but some risks are more prominent than others. For instance, it’s estimated that human error is responsible, at least in part, for as many as 90% of all data breaches. In other words, employees collectively represent the most significant cybersecurity risk, as both accidental and malicious insiders contribute to a growing number of breaches. For instance, we’ve seen examples of:

  • Employees compromising network security by engaging with phishing scams
  • Employees stealing company data to sell or leverage down the road
  • Employees accidentally sharing private information outside of appropriate channels
  • Employees accessing company data on personal devices

There are myriad ways insiders can compromise company data. Identify and deploy cybersecurity solutions that can bolster your defenses against the most prominent threats.

4. Communicate and prepare

Ultimately, cybersecurity isn’t just a priority for CISOs. It’s time to develop an all-in approach to data security to bring awareness and capability to every level of the company. In a real way, data security depends on each person playing an active role in the company’s defensive posture.

CISOs may be struggling, but they are immensely talented and uniquely important. It’s time to support their efforts in meaningful and tangible ways.

5 easy steps to immediately bolster cybersecurity during the pandemic

Cyber attacks have increased exponentially since the start of the pandemic, with AT&T Alien Labs Open Threat Exchange (OTX) finding 419,643 indicators of compromise (IOC) related to COVID-19 from January to March, with a 2,000% month-over-month increase from February to March.

Rush to bolster cybersecurity

Companies of all sizes and in all sectors have been forced to adapt to a remote work environment overnight, regardless of whether they were ready or not. As this fast-moving shift to virtual business occurred, cybercriminals also adjusted their strategy to take advantage of the expanded attack surface, with the volume of attacks up by nearly 40% in the last month and COVID-19-themed phishing attacks jumping by 500%. The current situation is an IT manager’s worst nightmare.

This new remote work environment ushers in an entirely new security landscape, and in record time. Long-term solutions can be found in zero trust models and cloud security adoption, but time is of the essence. Organizations should act now.

The following are a few short-term, easy-to-implement actions that IT managers can take now to bolster cybersecurity amid the current pandemic.

1. Apply “social distancing” to home networks

Traditionally, home Wi-Fi networks are used for less sensitive tasks, often unrelated to work: children play games on their tablet, voice assistants are activated to display the weather, and movies are streamed on smart TVs. Fast forward to today, and employees are now connecting to the office through this same network, leaving gaps for children or non-working adults who may also be accessing the internet via the same network. Lines are blurred, and so is security.

Just as social distancing is encouraged to limit the potential spread of COVID-19, the same should apply digitally to our home networks. IT managers can encourage employees to partition their home internet access. This means trying to block children and non-working adults from using the same network connection that is used to log into the office. This step alone helps prevent a tidal wave of unknown vulnerabilities.

One doesn’t need to have extensive IT skills in order to isolate a home network, which saves IT managers valuable time and resources. On the market today, there are several home and small office routers, costing around $100, that offer VLAN support, and most Wi-Fi kits offer the ability to set up a “guest” network. As an IT manager, it’s important to provide step-by-step instructions on how to set this up on common routers, while communicating the importance of taking this small step to greatly boost security.

2. Encourage the use of lightweight mobile devices

BYOD brings immense security risk. What types of malware exist on your employees’ home devices? Have they completed recent software updates? It’s a gamble not worth taking.

If possible, IT managers should provide employees with company-owned lightweight devices, like smartphones and tablets. For one, in most of the country, you can use mobile broadband capabilities to avoid home networks altogether. Additionally, these devices are designed to be managed remotely. Users are essentially teaming up with the manufacturers’ security teams in keeping the devices secure, as well as the mobile operators in ensuring a secure connection. Attach a quality keyboard to such lightweight devices, and employees will not miss their PCs.

3. Move to the cloud… now!

On-premise software is outdated and often ineffective. If your organization has not moved to the cloud yet, let this be the forcing function for that change. Customer relationship management systems, office productivity apps and even creative design platforms are all available now as SaaS offerings, and outperform their traditional software equivalents. With cloud solutions, organizations are working with the SaaS provider’s security teams to help keep vulnerabilities away.

Once employees have transitioned to lightweight devices operating SaaS applications in the cloud, the attack surface is reduced exponentially.

4. Secure employee remote access

Employees will be connecting devices over so many service connections that managing them on an ongoing, individual basis becomes difficult. Invest in secure remote access tools, such as a strong endpoint security solution and a cloud security gateway. These allow IT managers to set policies and monitor company-wide activity, even while the workforce is widely dispersed.

5. Brush up on password hygiene

I’m willing to bet that employees are logging into the office right now using poor passwords. They’re inputting passwords based on their children’s names, anniversary dates or, the worst, “password123.”

IT managers need to immediately (and regularly!) teach employees how to improve their security posture. One of the easiest ways to do that is to start with password hygiene. Insist that staff create long, complex, and unique passwords for every device and connection they use to access the office. Encourage the use of password managers to keep track of all logins. Staff should also set up two-factor authentication across the board, from the CEO down to the seasonal intern. This behavioral shift costs nothing and makes it significantly harder for cybercriminals to win.

We are all vulnerable to this pandemic. IT managers traditionally shoulder a tremendous amount of responsibility, but now with a remote work environment, that burden has quadrupled. While the to-do list may look exhaustive, try to focus on a few short-term actions that will bring peace of mind and bolstered security… right now.

How to implement least privilege in the cloud

According to a recent survey of 241 industry experts conducted by the Cloud Security Alliance (CSA), misconfiguration of cloud resources is a leading cause of data breaches.

The primary reason for this risk? Managing identities and their privileges in the cloud is extremely challenging because the scale is so large. It extends beyond just human user identities to devices, applications and services. Due to this complexity, many organizations get it wrong.

The problem becomes increasingly acute over time, as organizations expand their cloud footprint without establishing the capability to effectively assign and manage permissions. As a result, users and applications tend to accumulate permissions that far exceed technical and business requirements, creating a large permissions gap.

Consider the example of the U.S. Defense Department, which exposed access to military databases containing at least 1.8 billion internet posts scraped from social media, news sites, forums and other publicly available websites by CENTCOM and PACOM, two Pentagon unified combatant commands charged with US military operations across the Middle East, Asia, and the South Pacific. It had configured three Amazon Web Services S3 cloud storage buckets to allow any authenticated AWS user to browse and download the contents – and AWS accounts of this type can be acquired with a free sign-up.

Focus on permissions

To mitigate risks associated with the abuse of identities in the cloud, organizations are trying to enforce the principle of least privilege. Ideally, every user or application should be limited to the exact permissions required.

In theory, this process should be straightforward. The first step is to understand which permissions a given user or application has been assigned. Next, an inventory of those permissions actually being used should be conducted. Comparing the two reveals the permission gap, namely which permissions should be retained and which should be modified or removed.

This can be accomplished in several ways. The permissions deemed excessive can be removed or monitored and alerted on. By continually re-examining the environment and removing unused permissions, an organization can achieve least privilege in the cloud over time.
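
At its core, the comparison is a set difference. The sketch below shows the shape of the computation with illustrative permission names; in practice, the “used” set would be derived from audit logs.

```python
# Permissions assigned to an identity vs. permissions actually observed in use.
# The permission names are illustrative.
assigned = {"s3:GetObject", "s3:PutObject", "dynamodb:Query", "rds:DeleteDBInstance"}
used     = {"s3:GetObject", "dynamodb:Query"}

unused = assigned - used  # candidates for removal, or for monitoring and alerting
print("Permission gap:", sorted(unused))
# -> ['rds:DeleteDBInstance', 's3:PutObject']
```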

However, the effort required to determine the precise permissions necessary for each application in a complex cloud environment can be both labor intensive and prohibitively expensive.

Understand native IAM controls

Let’s look at AWS, since it is the most popular cloud platform and offers one of the most granular Identity and Access Management (IAM) systems available. AWS IAM is a powerful tool that allows administrators to securely configure access to AWS cloud resources. With over 2,500 permissions (and counting), IAM gives users fine-grained control over which actions can be performed on a given resource in AWS.

Not surprisingly, this degree of control introduces an equal (some might say greater) level of complexity for developers and DevOps teams.

In AWS, roles are used as machine identities. Granting an application specific permissions requires attaching access policies to the relevant role. These can be managed policies, created by the cloud service provider (CSP), or inline policies, created by the AWS customer.
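
For reference, here is what attaching a managed policy versus creating an inline policy looks like with boto3, the AWS SDK for Python; the role name, policy name, and bucket are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Attach an AWS-managed policy to a role (role name is a placeholder).
iam.attach_role_policy(
    RoleName="my-app-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Create a customer-defined inline policy scoped to a single bucket.
iam.put_role_policy(
    RoleName="my-app-role",
    PolicyName="s3-read-reports-bucket",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-reports-bucket/*",
        }],
    }),
)
```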

Rein in roles

Roles, which can be assigned more than one access policy or serve more than one application, make the journey to least privilege more challenging.

Here are several scenarios that illustrate this point.

1. Single application – single role: where an application uses a role with different managed and inline policies, granting privileges to access Amazon ElastiCache, RDS, DynamoDB, and S3 services. How do we know which permissions are actually being used? And once we do, how do we right-size the role? Do we replace managed policies with inline ones? Do we edit existing inline policies? Do we create new policies of our own?

2. Two applications – single role: where two different applications share the same role. Let’s assume that this role has access permissions to Amazon ElastiCache, RDS, DynamoDB and S3 services. But while the first application is using RDS and ElastiCache, the second is using ElastiCache, DynamoDB, and S3. Therefore, to achieve least privilege, the correct action is role splitting rather than simply role right-sizing – with right-sizing following as a second step.

3. Role chaining occurs when an application uses a role that does not have any sensitive permissions, but this role has the permission to assume a different, more privileged role. If the more privileged role has permission to access a variety of services like Amazon ElastiCache, RDS, DynamoDB, and S3, how do we know which services are actually being used by the original application? And how do we restrict the application’s permissions without disrupting other applications that might also be using the second, more privileged role?

One native AWS tool called Access Advisor allows administrators to investigate the list of services accessed by a given role and verify how it is being used. However, relying solely on Access Advisor does not connect the dots between access permissions and the individual resources involved, which many policy decisions require. To do that, it’s necessary to dig deep into the CloudTrail logs, as well as the compute management infrastructure.
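
Access Advisor data can also be pulled programmatically. Here is a minimal boto3 sketch (the role ARN is a placeholder):

```python
import time
import boto3

iam = boto3.client("iam")
role_arn = "arn:aws:iam::123456789012:role/my-app-role"  # placeholder

# Ask IAM to compile service last-accessed data for the role...
job_id = iam.generate_service_last_accessed_details(Arn=role_arn)["JobId"]

# ...then poll until the report is ready.
while True:
    details = iam.get_service_last_accessed_details(JobId=job_id)
    if details["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

for svc in details["ServicesLastAccessed"]:
    accessed = svc.get("LastAuthenticated", "never")
    print(f'{svc["ServiceName"]}: last accessed {accessed}')
```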

Least privilege in the cloud

Finally, keep in mind that we have only touched on native AWS IAM access controls. There are several additional issues to be considered when mapping access permissions to resources, including indirect access (via secrets stored in Key Management Systems and Secret Stores), or application-level access. That is a discussion for another day.

As we’ve seen, enforcing least privilege in the cloud to minimize access risks that lead to data breaches or service interruption can be manually unfeasible for many organizations. New technologies are emerging to bridge this governance gap by using software to automate the monitoring, assessment and right sizing of access permissions across all identities – users, devices, applications, etc. – in order to eliminate risk.