I recently watched my team composing some music for a cybersecurity awareness project and using it to take an immersive Dark Web Mission Control Centre to a whole new level. It got me thinking about what we – i.e., the cybersecurity industry – can learn from music. Music is a massive part of popular culture and is universally loved across the globe. Conversely, cybersecurity is unapproachable and abstract to most people and is often seen …
The post Strike a chord: What cybersecurity can learn from music appeared first on Help Net Security.
If you’re in a hands-on cybersecurity role that requires some familiarity with code, chances are good that you’ve had to think about SQL injection over and over (and over) again. It’s a common vulnerability that – despite being easily remedied – continues to plague our software and, if left undetected before deployment, provides a small window of opportunity to would-be attackers. December 2020 marked SQL injection’s 22nd birthday (of sorts). Despite this vulnerability being old …
The post SQL injection: The bug that seemingly can’t be squashed appeared first on Help Net Security.
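The remedy the excerpt alludes to is parameterized queries, which keep user input out of the SQL grammar entirely. A minimal illustration using Python's built-in sqlite3 module (the table and inputs are made up for demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Vulnerable: attacker-controlled input is concatenated into the SQL string.
user_input = "' OR '1'='1"
query = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # returns the admin row despite a bogus name

# Safe: placeholders keep data out of the SQL grammar entirely.
print(conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall())  # []
```

The same placeholder discipline applies in every database driver; only the placeholder token (`?`, `%s`, `:name`) varies.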
2020 has ended with a stunning display of nation-state cyber capabilities. The Kremlin’s SVR shocked the cybersecurity industry and U.S. government with its intrusions into FireEye and the U.S. Department of the Treasury by way of SolarWinds, revealing only traces of its long-term, sophisticated campaigns. These breaches are reminders that no organization is immune to cyber risk or to hacking. Every company is subject to the same reality: compromise is inevitable. While many companies are …
After a spate of cyberattacks on organizations involved in developing COVID-19 vaccines, there are growing concerns that hackers are taking aim at the distribution systems currently ramping up. IBM recently shone a light on a phishing scheme targeting organizations involved in the cold storage supply chains necessary to deliver the delicate vaccines. It advised healthcare organizations to be on high alert for more similar attacks. Hospitals will play a central role in storing and distributing …
The post Hospitals under siege: 5 ways to boost cybersecurity as the COVID-19 vaccine rolls out appeared first on Help Net Security.
Incydr is Code42’s new SaaS data risk detection and response solution, which enables security teams to mitigate file exposure and exfiltration risk without disrupting legitimate collaboration. Code42 focuses on the problems related to the massive “work from home” shift, i.e., the fact that many different collaboration tools are being used within global enterprises. While those tools allow people to collaborate more efficiently, they also allow them to share sensitive company data. Unfortunately, traditional security tools …
The post Review: Code42 Incydr – SaaS data risk detection and response appeared first on Help Net Security.
It’s safe to assume that pretty much everyone is ready to move on from 2020. Between the COVID-19 pandemic, political battles, and social unrest, this has been a stressful year in so many ways. It has also been a very active year for cybercriminals and fraudsters who have preyed on people’s fears and vulnerabilities to push new scams. They’ve spoofed government health sites to trick people into clicking on malware links. They’ve targeted food delivery …
The post 2020 set the stage for cybersecurity priorities in 2021 appeared first on Help Net Security.
In this article I’ll consider next year’s data security landscape with a focus on the two key issues you need to have on your planning agenda. Of course, how the pandemic plays out will have a huge say in tactical questions ranging from budget to manpower to project priorities – but these long-term strategic trends will impact IT organizations well beyond 2021.

The “bring your own” genie will leave the bottle

Over the last decade, …
The post The need for zero trust security a certainty for an uncertain 2021 appeared first on Help Net Security.
In 2020, cybersecurity became a business problem for every industry, as well as the U.S. government. According to a new report by the Aspen Cybersecurity Group, there are several opportunities for the new presidential administration to increase cybersecurity efforts and awareness to create a more resilient digital infrastructure. Organizations like the Cybersecurity and Infrastructure Security Agency (CISA), local and state governments, and the private sector have all taken significant steps to mitigate and respond to …
The post U.S. cybersecurity: Preparing for the challenges of 2021 appeared first on Help Net Security.
The recent SolarWinds software supply chain breach is a clear indication that strong OT cybersecurity is a must-have in today’s threat environment. Waterfall’s technologies have long enabled integration between OT networks and enterprise networks without the risk of any attack getting back into the protected network. The time has come to deploy this class of hardware-enforced protection universally on OT networks. The SolarWinds breach shows only that the cyber threat environment continues to worsen. The …
The main story of 2021 won’t be the disease, but the vaccine. With three effective, promising vaccines in development as of November, COVID-19 (and its treatment) will continue causing major shifts in nearly every facet of our lives. That is particularly true for cybersecurity. Our sector transformed in 2020, and we have still not finished adapting to the virus. Here are five ways that COVID-19 and its vaccines will cause cybersecurity to change in 2021: …
Virtualization has brought a dramatic level of growth and advancement to technology and business over the years. It transforms physical infrastructure into dedicated, partitioned virtual machines (VMs) that deliver critical cloud applications and services to multiple customer organizations using the same hardware. While one server would previously be tasked with one OS install, today’s servers can host multiple instances of Windows or Linux running concurrently to increase system utilization. Client virtualization is the next step …
The post 5 reasons IT should consider client virtualization appeared first on Help Net Security.
Passwords are a source of many security risks, with recent LastPass research revealing IT teams are spending five hours a week on average dealing with password-related issues. A passwordless login experience, on the other hand, provides employees with a user-friendly and secure way of accessing their accounts and devices – no matter where they are. This eliminates many password-related risks, such as password reuse or failing to change default credentials, which means improved security and …
For those working remotely during the pandemic, changes to how work is done have significantly increased stress levels – and when we’re stressed, we’re more likely to make mistakes that result in sensitive data being inadvertently put at risk. Our 2020 Outbound Email Security Report revealed that stressed and tired employees are behind 37% of the most serious data leaks – caused by all-too-common culprits, including adding an incorrect recipient to an email, attaching the …
The post Stress levels are rising, but that doesn’t have to mean more security incidents appeared first on Help Net Security.
From increasingly sophisticated threats to the patchwork of on-premises and cloud solutions that comprise most organizations’ IT infrastructure, the plethora of new IoT devices, and a highly distributed workforce, enterprises and government agencies face a wide range of challenges that make cyber threat detection and response more difficult than ever before.
Simultaneously, the cybersecurity industry is facing a shortage of skilled workers, putting increasing strain on enterprise security teams and their ability to effectively identify and respond to threats.
Considering this contextual backdrop, Security Orchestration, Automation and Response (SOAR) products offer an appealing solution, promising efficiencies in detecting and responding to threats. However, organizations need to understand how these solutions can also introduce new challenges if not implemented correctly. Without proper planning, organizations adopting security automation tools can fall victim to common missteps that quickly lead to less efficiency and a weaker security posture.
When introducing SOAR tools to an organization, the most important first step isn’t how the solution is configured, or the act of connecting it to other systems, or even determining what data sources it needs to integrate. The most important first step is having mature security processes on which to build. Simply taking the pre-built playbooks or automation scripts that SOAR vendors provide and plugging them into your environment will seldom yield the desired results.
Start by examining the processes and procedures your organization’s security team already has in place and identify the tasks that consume the majority of team members’ time. These will be the key use cases where SOAR can provide the most benefit by applying efficiency, speed and consistency. For example, in many organizations this might include processes such as looking up asset information or reviewing additional data points related to a security alert or a reported phishing email.
It could be the process of pulling data on what’s running in memory on a device and adding that detail to an existing incident management ticket to assist in an investigative decision. Or it could be isolating hosts or blocking an IP range on the network in order to stop a threat from spreading. These are all common use cases that can be effectively automated, but only if the underlying processes and procedures are mature and well-defined.
Different categories of automation require different levels of maturity in the underlying processes. If you plan to introduce any type of automated response – such as automated threat containment – you must be absolutely certain that the underlying processes are mature, or it could have a greater-than-intended impact on the availability of systems and people. Mature processes are those that have been proven, measured, inspected and performed iteratively at a volume that lets you understand and account for any variance in the way they work.
In a mature process you also understand how actions will impact downstream systems. Otherwise, if you apply automation to a process that is not mature and an edge case occurs, your automation may cause your own denial of service, potentially impacting critical systems.
One of the best areas to begin applying automation is within an organization’s security operations center (SOC) to speed the process of pulling together threat intelligence and asset information from several different sources to aid in the investigative and triage process for threats. Because it involves information gathering rather than performing a response, this scenario introduces less risk while still providing significant gains in efficiency by quickly bringing data from various sources into one view for SOC analysts to interpret and make decisions.
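Stripped to its essentials, an enrichment playbook of this kind just pulls context from several sources into one record for the analyst. The intel and asset lookups below are hypothetical stand-in stubs; a real playbook would call your threat-intelligence platform, CMDB, and EDR APIs instead:

```python
# Alert enrichment sketch: information gathering only, no automated response.
# The data sources here are hypothetical stubs, not a real SOAR vendor API.

def threat_intel_lookup(ip):
    # Stub: replace with a call to your threat-intelligence platform.
    known_bad = {"203.0.113.7": {"reputation": "malicious", "confidence": 90}}
    return known_bad.get(ip, {"reputation": "unknown", "confidence": 0})

def asset_lookup(host):
    # Stub: replace with a CMDB/asset-inventory query.
    inventory = {"web-01": {"owner": "ecommerce", "criticality": "high"}}
    return inventory.get(host, {"owner": "unknown", "criticality": "unknown"})

def enrich_alert(alert):
    """Gather context from several sources into a single view for triage."""
    return {
        **alert,
        "intel": threat_intel_lookup(alert["src_ip"]),
        "asset": asset_lookup(alert["host"]),
    }

alert = {"id": "A-1042", "src_ip": "203.0.113.7", "host": "web-01"}
enriched = enrich_alert(alert)
print(enriched["intel"]["reputation"], enriched["asset"]["criticality"])  # malicious high
```

Because the playbook only reads data, an edge case costs the analyst a bad lookup rather than an outage, which is what makes enrichment a low-risk first automation target.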
A related area that can benefit from SOAR is incident management, where applying SOAR tools to the process of gathering information, artifacts and audit logs related to incidents can not only speed responses but also help improve process maturity by ensuring consistent documentation and record collection during the incident management process.
I often encounter security professionals who have an idea of what they want to automate, and they jump straight into applying SOAR solutions around that idea – this can work, but often does not scratch the surface of the potential power of SOAR for the organization. Even when starting with a single use case, I recommend mapping out the idea into a process flow, then turning that process flow into a playbook for automation that can run in a supervised mode. That way, you have an iterative plan for how to mature that process before you run it in an autonomous mode (or iteratively less supervised modes).
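One way to express that supervised-to-autonomous progression in a playbook runner is an approval gate in front of every response action. The `approve_fn` hook here is an assumption standing in for your ticketing or chat-ops approval flow, not any particular product's API:

```python
# Approval gate for response actions: supervised mode requires a human
# sign-off; flipping supervised=False later makes the same playbook autonomous.
# approve_fn is a hypothetical hook for a ticketing/chat-ops approval flow.

def run_action(action, target, execute_fn, *, supervised=True, approve_fn=None):
    if supervised and (approve_fn is None or not approve_fn(action, target)):
        return f"SKIPPED {action} on {target} (awaiting approval)"
    execute_fn(action, target)
    return f"EXECUTED {action} on {target}"

executed = []
record = lambda a, t: executed.append((a, t))

# Supervised run where the analyst approved the action:
print(run_action("isolate_host", "web-01", record,
                 supervised=True, approve_fn=lambda a, t: True))
# Supervised run with no approval wired up -- nothing executes:
print(run_action("block_ip_range", "203.0.113.0/24", record, supervised=True))
print(executed)  # [('isolate_host', 'web-01')]
```

The iterative path to maturity is then a configuration change per playbook rather than a rewrite: loosen the gate only after the supervised runs have demonstrated predictable behavior.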
Introducing SOAR to an organization’s security operations is rarely a simple undertaking, and the complexity should not be underestimated. If you don’t plan for adequate resources and expertise up-front to implement this technology, you won’t get the return on investment (ROI) you are expecting, and certainly not on the timeline expected.
The SOAR implementation must also be managed and maintained over time, as it will need to continually evolve as your environment changes. Organizations that don’t have the staff or the skill sets on their security team to adequately maintain the SOAR implementation may benefit from a consultative and managed services model that can keep it functioning properly over time.
Ultimately, automation should be viewed as an outcome amplifier for the security team – not as a replacement for the security team itself. With proper planning, you can identify the most mature processes that your team performs often and map out detailed playbooks for automating them. These will introduce the least risk and provide the most benefit by creating greater efficiencies, enhancing your security team’s skills and freeing up their time to perform higher-level functions.
You can’t swing a virtual bat without hitting someone touting the value of artificial intelligence (AI) and machine learning (ML) technologies to transform big data and human expertise.
A new generation of businesses is promising to accelerate and automate decision making. Most countries, including the United States, view AI technology as critical to retaining or establishing global business leadership. The promise and value of AI and ML rank equal to or higher than other intellectual property or corporate secrets within an organization.
Despite this tremendous value, AI/ML assets have been nearly impossible to protect – especially when in use. This creates intellectual property risks that can give pause to both entrepreneurs and investors. The result is a growing sense of urgency to create better controls to protect the raw data, training algorithms, run-time inference engines and results generated – both from competitors and from malicious actors.
The good news is that recent hardware advances built into the latest advanced microprocessors and incorporated into high-end servers can be utilized to protect AI/ML assets, data and other sensitive applications – even during runtime. Harnessing secure enclaves to close the loop on AI/ML vulnerabilities resolves these security concerns and enables AI/ML to be deployed even more widely, effectively, and safely.
What is a secure enclave?
A secure enclave is a private region of memory whose contents are protected by hardware-grade encryption and hardware isolation techniques. Data in an enclave cannot be read or modified by any entity outside the enclave itself, even if the host is physically compromised. From a business perspective, enclaves enable owners to tightly control how, when, and where data (including software in use) is created, used, and retired.
Secure enclaves leverage new hardware-level security capabilities present in modern CPUs and cloud computing platforms from Intel, AMD, AWS, Microsoft Azure, and others. Additional software can leverage these raw features to create an enclave in which applications, which often require enclaved storage and communications, can operate unmodified.
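The programming model can be illustrated with a toy class in which plaintext only ever exists inside the enclave's own methods. To be clear about the assumptions: the XOR "cipher" below is a placeholder for the hardware's memory encryption, and the whole class is a model of the control pattern, not a real TEE API (SGX and SEV interfaces look quite different):

```python
import os

# Toy model of the enclave pattern: the untrusted host only ever handles
# sealed (encrypted) bytes; plaintext exists only inside the class. The XOR
# "cipher" stands in for hardware memory encryption -- this illustrates the
# control model and is NOT real cryptography or a real TEE API.

class ToyEnclave:
    def __init__(self):
        self._key = os.urandom(32)  # in real hardware, derived inside the CPU

    def _xor(self, data: bytes) -> bytes:
        return bytes(b ^ self._key[i % len(self._key)] for i, b in enumerate(data))

    def seal(self, plaintext: bytes) -> bytes:
        """Ciphertext the host can freely store, move, or back up."""
        return self._xor(plaintext)

    def unseal(self, sealed: bytes) -> bytes:
        """Only the enclave (key owner) can recover plaintext."""
        return self._xor(sealed)

    def run_inference(self, sealed_input: bytes) -> bytes:
        """Decrypt, process, and re-seal entirely 'inside' the enclave."""
        result = self.unseal(sealed_input).upper()  # stand-in for a real model
        return self.seal(result)

enclave = ToyEnclave()
sealed = enclave.seal(b"patient record")
print(enclave.unseal(enclave.run_inference(sealed)))  # b'PATIENT RECORD'
```

The point of the pattern is visible in the interface: every method that touches plaintext lives inside the trust boundary, so a compromised host can shuffle sealed bytes around but never inspect them.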
What do secure enclaves protect?
More things than you might think. AI and ML both leverage and create a number of data sets, each of which have different security requirements.
First is the raw data that ML algorithms consume in order to learn. This often includes such highly sensitive data as personal medical or financial records with immense potential for industry. Ideally, this kind of data would be leveraged without the potential of any kind of exposure. In today’s computing environment, that’s practically impossible, because using, moving, or storing data (even when encrypted) implicitly exposes it.
Secure enclaves eliminate this exposure while data is in use, as well as while it’s transported and stored. This makes it possible to use multiple data sets from multiple parties to train the AI engine with zero risk of exposure. Imagine the benefits this could bring to health care or insurance providers, and even to government. It enables greater access to data for analysis while virtually guaranteeing data privacy. That means smarter AI.
Proprietary training engines used to process this raw data also need protection. In many cases, the mountain of data used to build experience can’t be moved; the learning engine has to be moved to the distant data mountain. Wherever that software is stored or used, it is exposed to theft – potentially indefinitely – when it runs on untrusted hardware.
But running and storing machine learning algorithms within the confines of a secure enclave assures that proprietary learning techniques are kept in the hands of their owners, even when those algorithms run in insecure environments. Simple policy and controls can dictate where, how, and when the software can be used down to specific, uniquely identified CPUs.
Similarly, the resulting proprietary inference/expert engine, which makes decisions based on new (often real-time) data, must also be protected. The expertise and experience infused into it are core to the value of the business that created it. Enclaves can play a key role not just in protecting against software exposure and theft, but in controlling licensing and distribution as well. The same policy controls can potentially limit operations to specific CPUs, clouds, and time periods, which protects the seller’s investment.
Interestingly, these same enclave protections secure customer data as well, because they assure that data processed by an enclaved application isn’t accessible by anyone anywhere.
Finally, there are the conclusions that the software generates. Data generated within an enclave is secured and tightly controlled by default. Policy controls must explicitly be implemented to allow exposure, if exposure is ever required.
Greater security means greater opportunity
Secure enclave protection doesn’t just obviate the data and IP risks associated with developing and protecting commercial AI/ML capabilities. It also creates opportunities to build new and more powerful capabilities from broader data sets. Secure enclaves offer a solid path for businesses to significantly reduce the risk associated with these potentially huge new opportunities.
While DevOps culture has brought innovation to the industry and transformed the way software is developed, it’s arguably an outdated concept.
The truth is that DevOps has allowed for new features and applications to be rolled out at such speed that traditional security practices simply aren’t able to keep up. The other problem is that the security testing that does occur (e.g., penetration testing and code reviews), usually takes place towards the end of the DevOps lifecycle, which is often too late.
This is where DevSecOps comes in. The main idea behind DevSecOps is to incorporate security far earlier into the software lifecycle development process. Unfortunately, when speed is everything, developers are often reluctant to prioritize security – so how do you make DevSecOps stick with developers?
Don’t just “shift left”
The popular notion of “shifting left” doesn’t go far enough as it implies the process begins without security in mind. In order to positively engage developers and arm them with the skills and knowledge they need to code securely, the industry needs to adopt a “start left” mentality. This is where security is considered an absolute priority from day one by everyone from the C-suite down to the developers writing the code.
Developers are the key to DevSecOps success and, as a result, their approach to security must be consistent. Every line of code the engineers write, from the very start of the development process, needs to be created with security in mind. However, getting developers to simply change their habits isn’t always easy.
The primary responsibility of a developer is building software that is functional, innovative and delivered at speed. Not only is security frequently not considered a priority at the coding level, but it’s even seen by many as tedious and an obstruction to delivering creative and original features.
So, where do you start? First, you need to understand where your developers sit on the security skills spectrum. A great way to benchmark developer skills is by running live secure coding “tournaments” with your team using simulated scenarios. This is not only a way to get developers more engaged in the idea of secure code but will allow you to understand what further training each developer needs to ensure everyone’s learning is tailored to their skill set.
Skip the classroom training
DevSecOps should be viewed as an ongoing methodology and a process, rather than a quick fix. It’s a culture as much as a set of techniques, and adopting it requires skilled people, change management and an ongoing commitment from all parties involved. Providing employees with the right tools and training is a key step in this movement towards the rise of security developers, yet traditional teaching methods such as classroom-based learning are unlikely to change a developer’s mindset on secure coding.
Training in secure coding is essential but will only be effective if it’s relevant and demonstrates how security can fit seamlessly into a developer’s day job. Whilst tournaments are a great place to start, it’s the day-to-day training that will move the needle. One successful way of doing this is through hyper-relevant gamified learning platforms that are integrated with day-to-day tasks.
If the developer is actively led through how coding and security can be combined into the same offering, without taking them away from their job, they are more likely to continue with best practices in the future. For those looking to start on a smaller scale, there are free training apps that teach essential secure coding skills across different coding languages.
Organizations need to not only provide their developers with the necessary tools for training, but also ensure that developers are given adequate time and incentives to make it a priority. This could be by incorporating security into team and individual job descriptions and KPIs or creating reward structures that encourage further training.
Show them the money
The benefits of developers integrating security into their work extend not just to the successful delivery of the software, but also to the developers themselves. Writing secure code may seem like an obstacle at first but will become easier with time and will create efficiencies in the long term as there will be fewer bugs to remedy.
Additionally, consistently writing secure code will ensure that the developer is producing a higher standard and quality of work, and in turn, will become highly valued and in-demand.
Upskilling in security will ultimately provide developers with more prestigious and lucrative job opportunities as secure coding continues to become a highly sought-after skill.
Ensuring developers understand the benefits of learning to code securely not just for the company, but for themselves too, is key to establishing a security-first mindset.
While DevOps was innovative when it was first introduced, the industry has now moved past this concept and DevSecOps is here to stay. However, to be successful, security needs to be viewed as a priority by all involved from the very beginning, and this starts with developers.
Organizations first need to find out where the developers’ security skills currently sit and provide bespoke, gamified training to keep them engaged. This needs to be done whilst highlighting the benefits that upskilling in security can provide for the individual developers.
Ultimately, getting DevSecOps right is about built-in security, collaboration between developers and AppSec teams, and a cultural shift towards a deeper understanding of the importance that needs to be placed on security as a wider societal issue.
2020 was a “transformative” year, a year of adaptability and tackling new challenges. As we worked with organizations to deploy mission-critical data security, cryptography was comparatively stable. What cryptographic trends will gain traction in 2021?
The cloud will play a bigger role, especially in financial services
The movement toward broad acceptance of cloud-based encryption and key management will accelerate as more of the pieces come together. Organizations have become more aggressive with the cloud, especially financial services organizations that are moving toward payment processing in the cloud.
Cloud providers are offering more robust and flexible security to meet the demands of organizations that want to retain control of their keys and avoid vendor lock-in. Cloud providers have been listening to enterprises’ concerns around data security practices and are making forward strides with data access, key management, and data retention policies.
Homomorphic encryption will be part of your vocabulary
Homomorphic encryption allows for data to remain encrypted while it is being processed and manipulated. Homomorphic encryption could be used to secure data stored in the cloud or in transit. This gives organizations the ability to use data — such as doing analytics on your customer base — without compromising the integrity of the data as a whole.
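A toy version of an additively homomorphic scheme (Paillier, with deliberately tiny hardcoded primes) shows the idea: two ciphertexts can be combined so the result decrypts to the sum of the plaintexts, without ever decrypting the inputs. This is an illustration only; real deployments use 2048-bit keys and vetted libraries, never hand-rolled crypto:

```python
import math
import random

# Toy Paillier cryptosystem: multiplying two ciphertexts yields a ciphertext
# of the SUM of the plaintexts. Tiny primes for illustration only.
p, q = 61, 53
n = p * q                          # public modulus
n2 = n * n
g = n + 1                          # standard generator choice
lam = math.lcm(p - 1, q - 1)       # private key
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:     # randomness must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

a, b = 42, 99
c_sum = (encrypt(a) * encrypt(b)) % n2   # addition performed on ciphertexts
print(decrypt(c_sum))                    # 141
```

A cloud service could run the `c_sum` step on customer data it can never read; only the key holder can decrypt the aggregate. Fully homomorphic schemes extend this to arbitrary computation, at a much higher performance cost.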
BYOE adoption will increase
Bring Your Own Encryption (BYOE) will increase. BYOE is the next evolution of organizations being able to determine the level of control they want when it comes to managing their data security policies.
For example, what happens if an organization gets subpoenaed and its cloud provider turns its files over to the authorities? If the organization controlled its keys and could do client-side encryption on-premises, the data would be useless. There will likely be a big catalyst event whereby a company goes, “Whoa — what do you mean, a third party can release my information over to a legal authority?”
Encryption + key management, critical with shorter certificate lifecycles
Organizations need both encryption and key management to be tighter than ever. As the industry moves to one-year certificates, organizations are managing shorter digital certificate schedules. It’s more important than ever to keep track of expiration dates, and automation will play a big role.
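The kind of expiration tracking worth automating can be as simple as computing days-to-expiry from a certificate's `notAfter` field. The sketch below uses only Python's standard library; the 30-day threshold and the hostname are illustrative choices, not a standard:

```python
import socket
import ssl
from datetime import datetime, timezone

def fetch_cert(hostname, port=443):
    """Grab the peer certificate dict via a TLS handshake."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

def days_until_expiry(cert, now=None):
    """Days remaining before the certificate's notAfter timestamp."""
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

# A live check would be: days_until_expiry(fetch_cert("example.com"))
cert = {"notAfter": "Dec 31 23:59:59 2021 GMT"}
remaining = days_until_expiry(cert, now=datetime(2021, 12, 1, tzinfo=timezone.utc))
print(f"renew in {remaining} days")  # renew in 30 days
```

Run against an inventory of hosts on a schedule, a check like this turns a silent outage into a ticket filed 30 days early.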
To improve their security postures, organizations will emphasize bringing key management up to the same level as their encryption programs. What happens if you have deployed good policies, you deployed good encryption, but you deployed poor key management?
Cryptography will be significant in DevSecOps, especially for code signing
Getting tools that DevOps needs to secure its infrastructure — without slowing it down — will be critical. Looking at key management, hardware security modules (HSMs), crypto, and third-party monitoring tools, organizations will emphasize giving DevOps teams what they need to integrate security and quickly identify and troubleshoot trouble areas.
The goal will be to take away the pain points while expanding the use of encryption within the organization. When it comes to code signing, HSMs play a critical role. Code signing certificates, secure key generation, and certificate storage should be centralized and automated, natively integrating with CI/CD systems.
Manufacturers of long-term devices to embrace crypto agility
There has been a lot of talk in 2020 about quantum computers breaking current cryptography. In 2021, manufacturers of devices — satellites, cars, weapons, medical devices — that will be used for 10 to 20 years, will be smart to embrace quantum-safe cryptography. A crypto-agile solution could entail implementing hybrid certificates: signing them with conventional asymmetric encryption now but incorporating enough flexibility so they will transition smoothly to counteract the quantum computing threat when the time comes.
Whether it’s the cloud and organizations retaining control of the keys, BYOE and homomorphic encryption, DevSecOps embracing cryptography, or hybrid certificates for crypto agility, two themes stand out:
- Encryption and key management: you can’t have one without the other
- Shorter certificate lifecycles require more attention to key management than ever
We’re in for an exciting year ahead!
The final Patch Tuesday of the year is upon us and what a year it has been. Forcing many changes this year, the pandemic has impacted the way we conduct both security and IT operations. But even with the need to support remote operations and new applications that enable coordinated communication, one important aspect has not changed – the need to focus on security risk.
It’s easy to get consumed with troubleshooting performance issues, updating applications to provide the latest features, and other similar day-to-day activities, which can result in losing track of maintaining security of our systems.
In this monthly column, I focus on Microsoft updates and some of the more commonly used applications that require frequent security releases such as Adobe Reader, Google Chrome, Mozilla Firefox, etc. But we need to keep in mind that periodic updates are being released for all the applications we use and many of those updates include critical security fixes for vulnerabilities that are being exploited.
Very few (if any) of us are in a position to instantly update all the systems in our organizations, so we need to prioritize what needs to be updated first, and that should be driven by risk.
Risk is an interesting concept, because determining if one system is at a higher security risk than another can depend on many factors, which vary not only from company to company, but may change across departments within the same company.
We think about risk in general terms with regards to the importance of the system to the company’s business, the vulnerability state of the system, and the threat to the system. Each of these can be further broken down into factors of importance for the company. For example, we think of vulnerability state in relationship to factors such as patch state, configuration state, password compliance, user privileges, etc. These are just a few of the factors in one small area that can be used for risk determination.
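As a sketch, those factors can be combined into a simple weighted score used to order patching work. The weights, the 0-10 factor scale, and the example systems below are illustrative assumptions, not an industry standard:

```python
# Weighted risk score over the factors named above. Every organization
# would tune these weights and scales to its own environment.
WEIGHTS = {
    "business_importance": 0.30,
    "missing_patches":     0.25,
    "config_drift":        0.20,
    "credential_hygiene":  0.15,
    "excess_privileges":   0.10,
}

def risk_score(factors):
    """Each factor is scored 0 (good) to 10 (bad); result is on a 0-10 scale."""
    return sum(WEIGHTS[name] * factors.get(name, 0) for name in WEIGHTS)

systems = {
    "payroll-db": {"business_importance": 9, "missing_patches": 7,
                   "config_drift": 4, "credential_hygiene": 2, "excess_privileges": 5},
    "test-vm":    {"business_importance": 2, "missing_patches": 9,
                   "config_drift": 6, "credential_hygiene": 5, "excess_privileges": 3},
}

# Patch the highest-risk systems first.
for name, score in sorted(((n, risk_score(f)) for n, f in systems.items()),
                          key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

Note how the weighting changes the outcome: the test VM has more missing patches, but the payroll database still ranks first because business importance dominates the score.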
Many companies and tools are available to help you, or maybe you have your own process already in place to determine risk and prioritize system updates.
Coming back to vulnerabilities in software and the need to patch, I’d like to point out a recent report from the NSA which itemizes a series of vulnerabilities being actively exploited. You’ll notice a wide range of vulnerabilities. Several, like the Netlogon vulnerability, have been in the news.
A wide range of impacted software, operating systems, VPNs and other security products are included as well. Please review and carefully consider this information as part of your next risk assessment as you prioritize your December updates.
December 2020 Patch Tuesday forecast
- Expect a smaller but standard set of Microsoft operating system updates this month. We should see the usual monthly rollup and security-only patches for the older operating systems, including the extended security updates (ESU) for Windows 7 and Server 2008. Windows 10 will include the latest 20H2 update. These updates should be smaller, in terms of CVEs, because we had the Thanksgiving holiday here in the US limiting development time. Office, Microsoft 365, and the associated SharePoint server updates will be included as well.
- Adobe released updates for Acrobat and Reader as part of APSB20-67 this week, so there shouldn’t be anything new next week. We may see a final update to Adobe Flash Player as it reaches end-of-life. Be on the lookout if you require Flash in your environment.
- Nothing is expected from Apple next week. A security update for iTunes was released mid-November and an iCloud update was issued this week. We could see a security update for macOS Big Sur later this month in advance of the holidays; the last update was in mid-November.
- Google Chrome was updated to 87.0.4280.88 for Windows, Mac and Linux this week, but we should always expect new updates each week.
- Mozilla Thunderbird was updated this week, so a Firefox and Firefox ESR update will be coming soon.
It looks like a light December Patch Tuesday to wrap up the year. If you’ve been struggling to keep up, you may want to reassess your prioritization and make sure you have characterized your risk properly.
What began as two weeks of remote working due to COVID-19 has now stretched past the nine-month mark for many. The impact of telework on organizations can be felt across departments, including IT and security, which drove the almost overnight digital transformation that swept across the globe.
While organizations across various sectors were faced with the challenge of maximizing their telework posture, those in government services had the extra burden of supporting employees who needed remote access to classified information.
The technology investments spurred by the pandemic also left organizations open to new and increasing threats, with KPMG reporting that “more than four in ten (41 percent) of organizations have experienced increased [cybersecurity] incidents mainly from spear phishing and malware attacks.”
So, while organizations have always been encouraged to evaluate their security posture, patch their VPNs, and prioritize Zero Trust architectures, the pandemic forced them to accelerate the adoption of these measures and evaluate their security posture more seriously. In fact, KPMG also found that most CIOs believe the pandemic has permanently accelerated digital transformation and the adoption of emergent technologies.
By observation, this digital transformation and security transition has happened in what can be defined as three stages, originating when the pandemic first hit in March, spanning through the rest of 2020 and into 2021.
Stage 1 – Acclimating employees to their new remote workspace
Many organizations had to figure out how to increase capacity for critical technologies like VPN. While large consulting firms and IT services companies generally had the technology and procedures in place to make the transition, government and financial institutions were much further behind. With both industries operating in an environment not conducive to telework pre-pandemic, IT leaders had to onboard large numbers of employees onto the VPN network – in some cases going from 10,000 employees on a VPN to 150,000.
Updating technology to accommodate that scale is no easy feat and other hurdles like supply chain issues – e.g., technology coming from foreign nations that were already in lockdown – presented unexpected obstacles. Lessons learned from this pertain to having a disaster and response plan as well as understanding that you might have to build in more time to effectively solve these types of issues.
Stage 2 – Investing in new tech
Once companies could better support their remote workforce, they needed to further understand the additional controls needed to continue providing a secure remote work infrastructure in the long term. In response to this need, there were significant spikes (as much as 80% according to Okta) in the usage of tools like multi-factor authentication as organizations began to rethink the way employees should access networks.
There has also been an increase in DNS-layer security being added to the roster of "easy to implement" security tech geared toward a distributed workforce.
Stage 3 – Developing a permanent remote IT infrastructure
As organizations currently undergo planning and budget allocation for 2021, they are looking to invest in more permanent solutions. IT teams are trying to understand how they can best invest in solutions that will ensure a strong security posture.
There is also a growing recognition of the need for complete visibility into endpoints, even as devices operate on remote networks. Policies are being created around how much work should actually be done over a VPN, which in turn is producing more forward-looking, permanent policies and technology solutions.
But as security teams embrace new tools for security and operations to enable continuity efforts, those tools also generate new attack vectors. COVID-19 has given the IT community an opportunity to evaluate what can and can't be trusted, even when operating under Zero Trust architectures. For example, some technologies, like VPNs, can undermine the very goals they were designed to serve.
At the beginning of the pandemic, CISA issued a warning around the continued exploitation of specific VPN vulnerabilities. CISA conducted multiple incident response engagements at U.S. government and commercial entities where malicious cyber threat actors have exploited CVE-2019-11510—an arbitrary file reading vulnerability affecting VPN appliances—to gain access to victim networks.
Although the VPN provider released patches for CVE-2019-11510 in April 2019, CISA observed incidents where compromised Active Directory credentials were used months after the victim organization patched their VPN appliance.
This exploitation was a textbook example of cybercriminals adapting their attack methodologies to the increased use and scale of new technologies for remote workers. This concentrated adversarial effort caused security teams to reevaluate the tools they have put into place, and the scale at which they have done so. The four areas that security teams are putting a critical focus on include:
- The best process for reducing remote access to sensitive data
- The identification gap between commercial and classified data
- The security of collaboration tools across an organization
- Visibility of endpoints, even when they're not on the corporate network
At the end of the day, security is a journey, not a destination – what might have worked prior to the pandemic may no longer suit the evolving threat environment. And just because you have a security solution in place doesn't mean it won't become your next point of exploitation. It's imperative for security teams to continuously advise their organizations on the changing threat landscape, always looking to stay one step ahead of the attacker.
As organizations grapple with stage three of addressing their security posture, they must get inside the mindset of today’s cybercriminals who are working around the clock to maliciously exploit new technologies and workflows implemented by companies today.
Cyber attacks are on the rise during this year of uncertainty and chaos. Increased working from home, online shopping, and use of social platforms to stay connected and sane during this year have provided criminals with many attack avenues to exploit.
To mitigate the threat to their networks, systems and assets, many organizations perform some type of annual cybersecurity awareness education, as well as phishing simulations. Unfortunately, attackers are quick to adapt to changes while employees’ behavior changes slowly. Without a dramatic shift in how we educate employees about cybersecurity, all industries are going to see a rise in breaches and costs.
Changing the way people learn about cybersecurity
The average employee still doesn’t think about cybersecurity on a regular basis, because they haven’t been taught to “trust but verify,” but to “trust and be efficient.” But times are changing, and employees must be reminded on a daily basis and be aware that they (and the organization) are constantly under attack.
In the 1950s, there was a real push to increase industrial workplace safety. Worker safety and the number of days on a job site without an incident were made top of mind for all employees. How did they drive this shift? Through consistent messaging, with diverse ways of communicating, and by using daily reminders to ingrain the idea of safety within the organization and change how it functioned.
Hermann Ebbinghaus, a German psychologist whose pioneering research on memory led to the discovery of forgetting and learning curves, explained that without regular reminders that keep learning in mind, we just forget even what’s important. One of the main goals of training must be to increase retention and overcome people’s natural tendency to forget information they don’t see as critical.
Paul Frankland, a neuroscientist and a senior fellow in CIFAR's Child & Brain Development program, and Blake Richards, a neurobiologist and an associate fellow in CIFAR's Learning in Machines & Brains program, proposed that the real goal of memory is to optimize decision-making. "It's important that the brain forgets irrelevant details and instead focuses on the stuff that's going to help make decisions in the real world," they said.
Right now, cybersecurity education is lost and forgotten in most employees’ brains. It has not become important enough to help them make better decisions in real-world situations.
A different kind of training is needed to become truly “cyber secure” – a training that keeps the idea of cybersecurity top of mind and part of the critical information retained in the brain.
Microlearning and gamification
Most organizations are used to relatively "static" training. For example: fire safety is fairly simple – everyone knows where the closest exit is and how to escape the building. Worker safety training is similarly static: wear a yellow safety vest and a hard hat, make sure to have steel-toed shoes on a job site, etc.
The core messages for most trainings don’t evolve and change. That’s not the case with cybersecurity education and training: attacks are ever-changing, they differ based on the targeted demographic, current affairs, and the environment we are living in.
Cybersecurity education must be closely tied to the value and mission of an organization. It must also be adaptable and evolve with the changing times. Microlearning and gamification are new ways to help encourage and promote consistent cybersecurity learning. This is especially important because of the changing demographics: there are currently more millennials in the workforce than baby boomers, but the training methods have not altered dramatically in the last 30 years. Today’s employee is younger, more tech-savvy and socially connected. Modern training needs to acknowledge and utilize that.
Microlearning is the concept of learning or reviewing small chunks of information more frequently and repeating information in different formats. These variations, repetitions, and continued reminders help the user grasp and retain ideas for the long-term, instead of just memorizing them for a test and then forgetting them.
According to Ebbinghaus, four weeks after a one-time training only 20 percent of the information originally learned is retained by the learner. Microlearning can change those numbers and increase retention to 80 or 90 percent.
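The retention numbers above are consistent with an exponential forgetting curve of the form R = e^(-t/S), where t is time since learning and S is a memory-stability parameter. The sketch below illustrates the idea in Python; the stability value and the per-review boost are illustrative assumptions, not empirical constants.

```python
import math

def retention(days: float, stability: float) -> float:
    """Ebbinghaus-style exponential forgetting: R = e^(-t/S)."""
    return math.exp(-days / stability)

# One-time training: retention decays over four weeks.
# (stability of 17.5 days is an assumed value that yields ~20% at day 28)
one_time = retention(28, stability=17.5)

# Spaced microlearning: each short review resets the clock and
# (illustratively) multiplies stability by an assumed factor.
stability, last_review = 17.5, 0
for review_day in (7, 14, 21):
    stability *= 1.8
    last_review = review_day
spaced = retention(28 - last_review, stability)

print(f"one-time: {one_time:.0%}, spaced: {spaced:.0%}")
```

Under these assumed parameters, three short reviews push four-week retention from roughly 20 percent to above 90 percent – the qualitative effect microlearning aims for.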
Gamification amplifies specific game-playing elements within the training, such as competition, points accumulation, leaderboards, badges, and battles. Gamification blends with microlearning by turning bite-sized chunks of learning into neurochemical triggers, releasing dopamine, endorphins, oxytocin, and serotonin. These chemicals help reduce the stress and anxiety sometimes associated with learning new material, increase "feel-good" sensations, and foster feelings of connection.
Gamification increases the motivation to learn as well as knowledge recall by stimulating an area of the brain called the hippocampus. From a business perspective, 83% of employees who receive gamified training feel motivated, while 61% of those who receive non-gamified training feel bored and unproductive.
Other reports indicate that companies who use gamification in their training have 60% higher engagement and find it enhances motivation by 50%. Combining microlearning with gamification helps create better training outcomes with more engaged, involved employees who remember and use the skills learned within the training.
The bad guys never stop learning and trying new things, and the good guys can't afford to, either.
Cybersecurity is increasingly central to the existence of an organization, but it’s fairly new, rapidly evolving, and often a source of fear and uncertainty in people. No one wants to admit their ignorance and yet, even cyber experts have a hard time keeping up with the constant changes in the industry. A highly supported microlearning program can help keep employees current and empower them with key decision-making knowledge.
Designed to ensure that all companies transmit, store, and process payment card data securely, the Payment Card Industry Data Security Standard (PCI DSS) serves a critical purpose, and compliance with it is essential.
Failure to comply increases the risk of a data breach, which can lead to potential losses of revenue, customers, brand reputation and customer trust. Despite this risk, the 2020 Verizon Payment Security Report found that only 27.9% of global organizations maintained full PCI DSS compliance in 2019, marking the third straight year that PCI DSS compliance has declined.
In addition to the continued decline in compliance, the current iteration of PCI DSS (3.2.1) is expected to be replaced by PCI DSS 4.0 in mid-2021, with an extended transition period.
But as we enter the busiest shopping season of the year, in the midst of a global pandemic that has upended business practices, organizations cannot risk ignoring compliance with the existing PCI DSS 3.2.1 standard. Failure to achieve and maintain compliance creates gaps in securing sensitive cardholder data, making easy targets for cyber criminals. And with the holiday season historically known for rises in cyber attacks, organizations that fail to stay focused on compliance will carry the highest risk among organizations that handle card data.
So, what do organizations need to know about PCI DSS 4.0 and how can they proactively prepare for this update?
Rising risks and what’s new
The financial services industry has always been a prime target for hackers and malicious actors. Last year alone, the Federal Trade Commission received over 271,000 reports of credit card fraud in the United States. As consumers continue to prefer online payments and debit and credit card transactions, the prevalence of card fraud will continue to rise.
The core principle of the PCI DSS is to protect cardholder data, and with PCI DSS 4.0, it will continue to serve as the critical foundation for securing payment card data. As the industry leader in payment card security, the Payment Card Industry Security Standards Council (PCI SSC) will continue evaluating how to evolve the standard to accommodate changes in technology, risk mitigation techniques, and the threat landscape.
Additionally, the PCI SSC is looking at ways to introduce greater flexibility to payment card security and compliance, in order to support organizations using a broad range of controls and methods to meet security objectives.
Overall, PCI DSS 4.0 will set out to:
- Ensure PCI DSS continues to meet the security needs of the payments industry
- Add flexibility and support of additional methodologies to achieve security
- Promote security as a continuous process
- Enhance validation methods and procedures
As consumers and organizations continue to interact and conduct more business online, the need for enforcement of the PCI DSS requirements will become increasingly apparent.
Consumers are sharing Personally Identifiable Information (PII) with every transaction, and as that information is shared across networks, consumers require organizations to provide assurance that they are handling such data in a secure manner.
Once implemented, PCI DSS 4.0 will place a greater emphasis on security as a continuous process with the goal of promoting fluid data management practices that integrate with an organization’s overall security and compliance posture.
While PCI DSS 4.0 continues to undergo industry consultation prior to its final release, potential changes for organizations to keep in mind include:
- Authentication, specific consideration for the NIST MFA/password guidance
- Broader applicability for encrypting cardholder data on trusted networks
- Monitoring requirements to consider technology advancement
- Greater frequency of testing of critical controls – for example, incorporating some requirements from the Designated Entities Supplemental Validation (PCI DSS Appendix A3) into regular PCI DSS requirements
The second request for comments (RFC) period is still ongoing, and PCI DSS 4.0 is expected to become available in mid-2021. To accommodate the budgetary and organizational changes necessary to achieve compliance, an extended transition period of 18 months and an enforcement date will be set by the PCI SSC after PCI DSS 4.0 has been published.
Making good use of this time will be critical, so organizations should develop a thorough implementation plan that updates reporting templates and forms, and any ongoing monitoring and recurring compliance validation to meet the updated requirements.
Tips for achieving PCI DSS compliance
The best piece of advice is to first ensure full compliance with the current version of the standard. This provides a solid baseline to work from when planning for future updates to PCI DSS. When the new version takes effect in 2021, organizations can begin internally assessing and preparing their networks for any new requirements.
PCI DSS is already known as being one of the most detailed and prescriptive data security standards to date, and version 4.0 is expected to be even more comprehensive than its predecessor.
With millions of transactions occurring each day, organizations are already collecting, sharing and storing massive amounts of consumer data that they must protect. Even for organizations currently in compliance with PCI DSS 3.2.1, it is critical to establish a holistic view of their data management strategies to assess potential lapses, gaps and threats. To achieve this holistic view and ensure readiness for version 4.0, organizations should take the following steps:
- Conduct a data discovery sweep – By conducting a thorough data discovery sweep of all data storage across the entire network, organizations can eliminate assumptions from their data management practices. Data discovery provides organizations with greater visibility into the strengths and vulnerabilities of the network, as well as a better sense of how PII flows through all repositories – including structured data, unstructured data, on-premises storage, and cloud storage – to ensure proper data management techniques.
- Enact strategies that promote smart data decisions – Once an organization understands how data flows through its environment and where it’s located, they can use these fact-based insights to enact policies and strategies that prioritize data privacy. Data privacy depends on employees, so organizations must take the time to educate employees on the role they play in organizational security. This includes training and continued network data audits to ensure no customer data slips through the cracks or is forgotten.
- Appoint a leader to drive compliance – With the average organization already adhering to 13 different compliance regulations, compliance can be overwhelming. Organizations should look to appoint a security compliance officer or internal lead to oversee ongoing compliance initiatives. This person should seek to become an expert in PCI DSS (including progress toward version 4.0) and in the organization's other compliance obligations. Furthermore, they can become the go-to person for ensuring proper data management practices.
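As one small, hypothetical illustration of the data discovery step above, the sketch below scans a text file for candidate card numbers (PANs) by matching digit runs and validating them with the Luhn checksum that payment card numbers use. A real sweep would also cover databases, unstructured repositories, and cloud storage; the regex and file handling here are simplified assumptions.

```python
import re
from pathlib import Path

def luhn_valid(number: str) -> bool:
    """Luhn checksum used by payment card numbers."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 13-16 digits, optionally separated by spaces or hyphens.
PAN_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def scan_file(path: Path) -> list[str]:
    """Return candidate card numbers found in a text file."""
    hits = []
    text = path.read_text(errors="ignore")
    for match in PAN_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits
```

Running such a scan across file shares is one fact-based way to find cardholder data that has slipped outside approved repositories, rather than assuming none exists.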
It's been nearly 15 years since PCI DSS was first released, and since then, consumers and businesses have substantially increased the number of transactions and business activities conducted online using payment cards. For this reason, the PCI DSS remains just as critical for securing data as it ever was.
The organizations that leverage the PCI DSS as a baseline to achieve ongoing awareness on the security of their data and look for proactive ways to secure their networks will be the most successful moving forward, gaining consumer and employee trust through their compliance actions.