Is your organization prepared for PCI DSS 4.0?

Compliance with the Payment Card Industry Data Security Standard (PCI DSS) serves a critical purpose: ensuring that all companies that transmit, store or process payment card data do so securely.

Failure to comply increases the risk of a data breach, which can lead to potential losses of revenue, customers, brand reputation and customer trust. Despite this risk, the 2020 Verizon Payment Security Report found that only 27.9% of global organizations maintained full PCI DSS compliance in 2019, marking the third straight year that PCI DSS compliance has declined.

In addition to the continued decline in compliance, the current iteration of PCI DSS (3.2.1) is expected to be replaced by PCI DSS 4.0 in mid-2021, with an extended transition period.

But as we enter the busiest shopping season of the year, in the midst of a global pandemic that has upended business practices, organizations cannot risk ignoring compliance with the existing PCI DSS 3.2.1 standard. Failure to achieve and maintain compliance creates gaps in securing sensitive cardholder data, making organizations easy targets for cyber criminals. And with the holiday season historically marked by a rise in cyber-attacks, organizations that fail to stay focused on compliance will be among the highest-risk of any that handle card data.

So, what do organizations need to know about PCI DSS 4.0 and how can they proactively prepare for this update?

Rising risks and what’s new

The financial services industry has always been a prime target for hackers and malicious actors. Last year alone, the Federal Trade Commission received over 271,000 reports of credit card fraud in the United States. As consumers continue to prefer online payments and debit and credit card transactions, the prevalence of card fraud will continue to rise.

The core principle of the PCI DSS is to protect cardholder data, and with PCI DSS 4.0, it will continue to serve as the critical foundation for securing payment card data. As the industry leader in payment card security, the Payment Card Industry Security Standards Council (PCI SSC) will continue evaluating how to evolve the standard to accommodate changes in technology, risk mitigation techniques, and the threat landscape.

Additionally, the PCI SSC is looking at ways to introduce greater flexibility to payment card security and compliance, in order to support organizations using a broad range of controls and methods to meet security objectives.

Overall, PCI DSS 4.0 will set out to:

  • Ensure PCI DSS continues to meet the security needs of the payments industry
  • Add flexibility and support of additional methodologies to achieve security
  • Promote security as a continuous process
  • Enhance validation methods and procedures

As consumers and organizations continue to interact and conduct more business online, the need for enforcement of the PCI DSS will only become more apparent.

Consumers are sharing Personally Identifiable Information (PII) with every transaction, and as that information is shared across networks, consumers require organizations to provide assurance that they are handling such data in a secure manner.

Once implemented, PCI DSS 4.0 will place a greater emphasis on security as a continuous process with the goal of promoting fluid data management practices that integrate with an organization’s overall security and compliance posture.

While PCI DSS 4.0 continues to undergo industry consultation prior to its final release, potential changes for organizations to keep in mind include:

  • Authentication, with specific consideration of the NIST MFA/password guidance (a brief password-check illustration follows this list)
  • Broader applicability for encrypting cardholder data on trusted networks
  • Monitoring requirements to consider technology advancement
  • Greater frequency of testing of critical controls – for example, incorporating some requirements from the Designated Entities Supplemental Validation (PCI DSS Appendix A3) into regular PCI DSS requirements
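
To make the authentication item above a little more concrete, here is a minimal sketch, purely illustrative and not taken from the draft standard, of a password check loosely aligned with NIST SP 800-63B guidance: it favors length and screening against known-breached passwords over forced composition rules. The breached-password file name is a hypothetical placeholder.

```python
# Illustrative sketch only: a password check loosely aligned with NIST SP 800-63B
# guidance (favor length and breach screening over composition rules).
# The breached-password list is a hypothetical local file, not a real feed.

MIN_LENGTH = 8    # NIST 800-63B minimum for memorized secrets
MAX_LENGTH = 64   # allow long passphrases

def load_breached_passwords(path="breached-passwords.txt"):
    """Load a locally maintained list of known-compromised passwords."""
    try:
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}
    except FileNotFoundError:
        return set()

def check_password(candidate: str, breached: set) -> list:
    """Return a list of reasons the password should be rejected."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append(f"shorter than {MIN_LENGTH} characters")
    if len(candidate) > MAX_LENGTH:
        problems.append(f"longer than {MAX_LENGTH} characters")
    if candidate.lower() in breached:
        problems.append("appears in a known-breach corpus")
    # Note: no forced mix of symbols/digits and no scheduled expiry --
    # both are discouraged by the NIST guidance referenced above.
    return problems

if __name__ == "__main__":
    breached = load_breached_passwords()
    for pw in ["P@ss1", "correct horse battery staple"]:
        issues = check_password(pw, breached)
        print(pw, "->", "OK" if not issues else "; ".join(issues))
```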

The second request for comments (RFC) period is still ongoing, and PCI DSS 4.0 is expected to become available in mid-2021. To accommodate the budgetary and organizational changes necessary to achieve compliance, an extended transition period of 18 months and an enforcement date will be set by the PCI SSC after PCI DSS 4.0 has been published.

Making good use of this time will be critical, so organizations should develop a thorough implementation plan that updates reporting templates and forms, and any ongoing monitoring and recurring compliance validation to meet the updated requirements.

Tips for achieving PCI DSS compliance

The best piece of advice is to first ensure full compliance with the current version of the standard. This provides a solid baseline to work from when planning for future updates to PCI DSS. When the new version is released in 2021, organizations can begin internal assessments and prepare their networks for any new requirements.

PCI DSS is already known as being one of the most detailed and prescriptive data security standards to date, and version 4.0 is expected to be even more comprehensive than its predecessor.

With millions of transactions occurring each day, organizations are already collecting, sharing and storing massive amounts of consumer data that they must protect. Even for organizations currently in compliance with PCI DSS 3.2.1, it is critical to establish a holistic view of their data management strategies to assess potential lapses, gaps and threats. To achieve this holistic view and ensure readiness for version 4.0, organizations should take the following steps:

  • Conduct a data discovery sweep – By conducting a thorough data discovery sweep of all data storage across the entire network, organizations can eliminate assumptions from their data management practices. Data discovery provides organizations with greater visibility into the strengths and vulnerabilities of the network as well as a better sense of how PII flows through all repositories, including structured data, unstructured data, on-premises storage and cloud storage, to ensure proper data management techniques (a minimal scanning sketch follows this list).
  • Enact strategies that promote smart data decisions – Once an organization understands how data flows through its environment and where it’s located, it can use these fact-based insights to enact policies and strategies that prioritize data privacy. Data privacy depends on employees, so organizations must take the time to educate employees on the role they play in organizational security. This includes training and continued network data audits to ensure no customer data slips through the cracks or is forgotten.
  • Appoint a leader to drive compliance – With the average organization already adhering to 13 different compliance regulations, compliance can be overwhelming. Organizations should look to appoint a security compliance officer or internal lead to oversee ongoing compliance initiatives. This person should seek to become an expert in PCI DSS, including progress toward version 4.0, as well as in the other compliance regimes the organization must meet. Furthermore, they can become the go-to person for ensuring proper data management practices.
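
As promised above, here is a minimal sketch of what the data discovery step might look like in practice: a script that walks a directory tree, looks for digit patterns resembling primary account numbers (PANs), and uses a Luhn checksum to cut false positives. The root path, file-size cutoff, and report format are illustrative assumptions, not part of any PCI SSC guidance, and a real sweep would also need to cover databases and cloud storage.

```python
# Illustrative sketch of a data discovery sweep for cardholder data.
# Scans a directory tree for candidate primary account numbers (PANs)
# using a digit-pattern match plus a Luhn checksum to reduce false positives.
# The root path and file-size cutoff are illustrative assumptions.

import os
import re

PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")
MAX_FILE_BYTES = 10 * 1024 * 1024  # skip reading beyond 10 MB per file in this toy example

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total, alt = 0, False
    for d in reversed(digits):
        n = int(d)
        if alt:
            n = n * 2
            if n > 9:
                n -= 9
        total += n
        alt = not alt
    return total % 10 == 0

def scan_file(path):
    """Yield (offset, masked PAN) for each plausible card number in a text file."""
    try:
        with open(path, "r", errors="ignore") as f:
            text = f.read(MAX_FILE_BYTES)
    except OSError:
        return
    for match in PAN_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_ok(digits):
            yield match.start(), digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

def sweep(root):
    """Walk the tree and report files that appear to contain cardholder data."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            for offset, masked in scan_file(path):
                print(f"{path}:{offset}: possible PAN {masked}")

if __name__ == "__main__":
    sweep(".")  # placeholder root; point at the repositories being assessed
```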

It’s been nearly 15 years since PCI DSS was first released, and since then, consumers and businesses have substantially increased the number of transactions and business activities conducted online using payment cards. For this reason, the PCI DSS remains just as critical for securing data as it ever was.

The organizations that leverage the PCI DSS as a baseline to achieve ongoing awareness on the security of their data and look for proactive ways to secure their networks will be the most successful moving forward, gaining consumer and employee trust through their compliance actions.

How a move to the cloud can improve disaster recovery plans

COVID-19 and the subsequent global recession have thrown a wrench into IT spending. Many enterprises have placed new purchases on hold. Gartner recently projected that global spending on IT would drop 8% overall this year — and yet dollars allocated to cloud-based services are still expected to rise by approximately 19%, bucking that downward trend.

Underscoring the relative health of the cloud market, IDC reported that all growth in traditional tech spending will be driven by four platforms over the next five years: cloud, mobile, social and big data/analytics. Its 2020-2023 forecast states that traditional software continues to represent a major contribution to productivity, while investments in mobile and cloud hardware have created new platforms that will enable the rapid deployment of new software tools and applications.

With entire workforces suddenly going remote all over the world, there certainly are a number of specific business problems that need to be addressed, and many of the big issues involve VPNs.

Assault on VPNs

Millions of employees are working from home, and they all have to securely access their corporate networks. The vast majority of enterprises still rely on on-premises servers to some degree (estimates range from 60% to 98%), so VPNs play a vital role in enabling that employee connection to the network. This comes at a cost, though: bandwidth is gobbled up, slowing network performance — sometimes to a crippling level — and this has repercussions.

Maintenance of the thousands of machines and devices connected to the network gets sacrificed. The deployment of software, updates and patches simply doesn’t happen with the same regularity as when everyone works on-site. One reason for this is that content distribution (patches, applications and other updates) can take up much-needed bandwidth, and as a result, system hygiene gets sacrificed for the sake of keeping employees productive.

Putting off endpoint management, however, exposes corporate networks to enormous risks. Bad actors are well aware that endpoints are not being maintained at the same level as pre-pandemic, and they are more than willing to take advantage. Recent stats show that the volume of cyberattacks today is pretty staggering — much higher than prior to COVID-19.

Get thee to the cloud: Acceleration of modern device management

Because of bandwidth concerns, the pressure to trim costs, and the need to maintain machines in new ways, many enterprises are accelerating their move to the cloud. The cloud offers a lot of advantages for distributed workforces while also reducing costs. But digital transformation and the move to modern device management can’t happen overnight.

Enterprises have invested too much time, money, physical space and human resources to just walk away. Not to mention, on-premises environments have been highly reliable. Physical servers are one of the few things IT teams can count on to just work as intended these days.

Hybrid environments offer a happy medium. With the latest technology, enterprises can begin migrating to the cloud and adapt to changing conditions, meeting the needs of distributed teams. They can also save some money in the process. At the same time, they don’t have to completely abandon their tried-and-true servers.

Solving specific business problems: Content distribution to keep systems running

But what about those “specific business problems,” such as endpoint management and content distribution? Prior to COVID-19, this had been one of the biggest hurdles to digital transformation. It was not possible to distribute software and updates at scale without negatively impacting business processes and without excessive cost.

The issue escalated with the shift to remote work. Fortunately, technology providers have responded, developing solutions that leverage secure and efficient delivery mechanisms, such as peer-to-peer content distribution, that can work in the cloud. Even in legacy environments, vast improvements have been made to reduce bandwidth consumption.
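
To illustrate the general idea behind peer-to-peer content distribution, rather than any specific vendor's product, here is a hedged toy sketch: an endpoint asks peers on the local subnet for a patch chunk first and only falls back to the cloud origin when no peer has it, which is how these schemes conserve WAN and VPN bandwidth. All hosts, URLs, and the chunk layout are placeholder assumptions.

```python
# Toy illustration of peer-first content distribution (not any vendor's protocol).
# An endpoint tries nearby peers for a content chunk before falling back to the
# cloud origin, conserving WAN/VPN bandwidth. Hosts and URLs are placeholders.

import hashlib
import urllib.request

LOCAL_PEERS = ["http://10.0.1.21:8080", "http://10.0.1.34:8080"]  # hypothetical peers
CLOUD_ORIGIN = "https://updates.example.com"                       # hypothetical origin

def fetch(url, timeout=3):
    """Download the full response body from a URL."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

def get_chunk(chunk_id: str, expected_sha256: str) -> bytes:
    """Fetch a content chunk, preferring local peers over the cloud origin."""
    sources = [f"{peer}/chunks/{chunk_id}" for peer in LOCAL_PEERS]
    sources.append(f"{CLOUD_ORIGIN}/chunks/{chunk_id}")
    for url in sources:
        try:
            data = fetch(url)
        except OSError:
            continue  # peer offline or chunk not cached there; try the next source
        if hashlib.sha256(data).hexdigest() == expected_sha256:
            return data  # integrity check keeps a bad peer from poisoning the cache
    raise RuntimeError(f"chunk {chunk_id} unavailable from all sources")
```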

These solutions allow enterprises to transition from a traditional on-premises infrastructure to the cloud and modern device management at their own speed, making their company more agile and resilient to the numerous risks they encounter today. Breakthrough technologies also support multiple system management platforms and help guarantee endpoints stay secure and updated even if corporate networks go down – something that, given the world we live in today, is a very real possibility.

Disaster averted

Companies like Garmin and organizations such as the University of California San Francisco joined the ranks of unwitting ransomware victims in recent months. Their systems were seized, only to be released upon payment of millions of dollars.

While there is the obvious hard cost involved, there are severe operational costs as well — employees who can’t get on the network to do their jobs, systems that must be scanned, updated and remediated to ensure the network isn’t further compromised, and so on. A lot has to happen within a short period of time in the wake of a cyberattack to get people back to work as quickly and safely as possible.

Fortunately, with modern cloud-based content distribution solutions, all that is needed for systems to stay up is electricity and an internet connection. Massive redundancy is being built into the design of products to provide extreme resilience and help ensure business continuity in case part or all of the corporate network goes down.

The newest highly scalable, cloud-enabled content distribution options enable integration with products like Azure CDN and Azure Storage and also provide a single agent for migration to modern device management. With features like cloud integration, internet P2P, and predictive bandwidth harvesting, enterprises can leverage a massive amount of bandwidth from the internet to manage endpoints and ensure they always stay updated and secure.
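
As one hedged example of the kind of cloud storage integration described above, and not a depiction of any particular vendor's agent, the official azure-storage-blob SDK can pull an update package straight from Azure Blob Storage over the internet, so an endpoint can stay patched without touching the corporate VPN. The container and blob names below are assumptions made up for illustration.

```python
# Hedged example: pulling an update package from Azure Blob Storage with the
# official azure-storage-blob SDK (pip install azure-storage-blob).
# The connection string, container, and blob names are illustrative assumptions.

import os
from azure.storage.blob import BlobClient

conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]

blob = BlobClient.from_connection_string(
    conn_str=conn_str,
    container_name="endpoint-updates",    # hypothetical container
    blob_name="2020-11/agent-patch.zip",  # hypothetical blob
)

# Stream the blob to disk; the endpoint never needs the corporate VPN for this.
with open("agent-patch.zip", "wb") as out:
    downloader = blob.download_blob()
    downloader.readinto(out)

print("download complete")
```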

Given these new developments precipitated and accelerated by COVID-19, as well as the clear, essential business problem these solutions address, expect to see movement and growth in the cloud sector. Expect to see an acceleration of modern device management, and despite IT spending cuts, expect to see a better, more secure and reliable, cost efficient, operationally efficient enterprise in the days to come.

The security consequences of massive change in how we work

Organizations underwent an unprecedented IT change this year amid a massive shift to remote work, accelerating adoption of cloud technology, Duo Security reveals.

The security implications of this transition will reverberate for years to come, as the hybrid workplace demands the workforce to be secure, connected and productive from anywhere.

The report details how organizations, with a mandate to rapidly transition their entire workforce to remote, turned to remote access technologies such as VPN and RDP, among numerous other efforts.

As a result, authentication activity to these technologies swelled 60%. A complementary survey recently found that 96% of organizations made cybersecurity policy changes during the COVID-19 pandemic, with more than half implementing MFA.

Cloud adoption also accelerated

Daily authentications to cloud applications surged 40% during the first few months of the pandemic, the bulk of which came from enterprise and mid-sized organizations looking to ensure secure access to various cloud services.

As organizations scrambled to acquire the requisite equipment to support remote work, employees relied on personal or unmanaged devices in the interim. Consequently, blocked access attempts due to out-of-date devices skyrocketed 90% in March. That figure fell precipitously in April, indicating healthier devices and decreased risk of breach due to malware.

“As the pandemic began, the priority for many organizations was keeping the lights on and accepting risk in order to accomplish this end,” said Dave Lewis, Global Advisory CISO, Duo Security at Cisco. “Attention has now turned towards lessening risk by implementing a more mature and modern security approach that accounts for a traditional corporate perimeter that has been completely upended.”

Additional report findings

So long, SMS – The prevalence of SIM-swapping attacks has driven organizations to strengthen their authentication schemes. Year-over-year, the percentage of organizations that enforce a policy to disallow SMS authentication nearly doubled from 8.7% to 16.1%.

Biometrics booming – Biometrics are nearly ubiquitous across enterprise users, paving the way for a passwordless future. Eighty percent of mobile devices used for work have biometrics configured, up 12% over the past five years.

Cloud apps on pace to pass on-premises apps – Use of cloud apps is on pace to surpass use of on-premises apps by next year, accelerated by the shift to remote work. Cloud applications make up 13.2% of total authentications, a 5.4% increase year-over-year, while on-premises applications account for 18.5% of total authentications, down 1.5% since last year.

Apple devices 3.5 times more likely to update quickly vs. Android – Ecosystem differences have security consequences. On June 1, Apple iOS and Android both issued software updates to patch critical vulnerabilities in their respective operating systems.

iOS devices were 3.5 times more likely to be updated within 30 days of a security update or patch, compared to Android.

Windows 7 lingers in healthcare despite security risks – More than 30% of Windows devices in healthcare organizations still run Windows 7, despite end-of-life status, compared with 10% of organizations across Duo’s customer base.

Healthcare providers are often unable to update deprecated operating systems due to compliance requirements and restrictive terms and conditions of third-party software vendors.

Windows devices, Chrome browser dominate business IT – Windows continues its dominance in the enterprise, accounting for 59% of devices used to access protected applications, followed by macOS at 23%. Overall, mobile devices account for 15% of corporate access (iOS: 11.4%, Android: 3.7%).

On the browser side, Chrome is king with 44% of total browser authentications, resulting in stronger security hygiene overall for organizations.

UK and EU trail US in securing cloud – United Kingdom and European Union-based organizations trail US-based enterprises in user authentications to cloud applications, signaling less cloud use overall or a larger share of applications not protected by MFA.

Most companies have high-risk vulnerabilities on their network perimeter

Positive Technologies performed instrumental scanning of the network perimeter of selected corporate information systems. A total of 3,514 hosts were scanned, including network devices, servers, and workstations.

The results show the presence of high-risk vulnerabilities at most companies. However, half of these vulnerabilities can be eliminated by installing the latest software updates.

The research shows high-risk vulnerabilities at 84% of companies across finance, manufacturing, IT, retail, government, telecoms and advertising. At 58% of companies, there is at least one host with a high-risk vulnerability that has a publicly available exploit.

Publicly available exploits exist for 10% of the vulnerabilities found, which means attackers can exploit them even without professional programming skills or experience in reverse engineering.

The detected vulnerabilities are caused by the absence of recent software updates, outdated algorithms and protocols, configuration flaws, mistakes in web application code, and accounts with weak and default passwords.

Vulnerabilities can be fixed by installing the latest software versions

As part of the automated security assessment of the network perimeter, 47% of detected vulnerabilities can be fixed by installing the latest software versions.

All companies had problems with keeping software up to date. At 42% of them, Positive Technologies found software for which the developer had announced the end of life and stopped releasing security updates. The oldest vulnerability found in the automated analysis was 16 years old.

Analysis revealed remote access and administration interfaces, such as Secure Shell (SSH), Remote Desktop Protocol (RDP), and Telnet, exposed on the network perimeter. These interfaces allow any external attacker to conduct bruteforce attacks.

Attackers can bruteforce weak passwords in a matter of minutes and then obtain access to network equipment with the privileges of the corresponding user before proceeding to develop the attack further.
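
As a hedged sketch of how an organization might check its own perimeter for the exposed administration interfaces described above, the snippet below performs a simple TCP connect test against SSH, Telnet, and RDP ports on a list of public hosts. It is a toy self-check, not a substitute for the instrumented scanning Positive Technologies performed, and the host list is a placeholder; only scan addresses you are authorized to test.

```python
# Toy self-check for exposed remote-administration interfaces on the perimeter.
# Attempts a TCP connection to SSH, Telnet, and RDP ports on each public host.
# Only scan hosts you are authorized to test; the host list is a placeholder.

import socket

ADMIN_PORTS = {22: "SSH", 23: "Telnet", 3389: "RDP"}
PUBLIC_HOSTS = ["203.0.113.10", "203.0.113.25"]  # placeholder addresses

def is_open(host: str, port: int, timeout=2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in PUBLIC_HOSTS:
    for port, service in ADMIN_PORTS.items():
        if is_open(host, port):
            print(f"{host}:{port} ({service}) is reachable from the internet "
                  f"- confirm it is needed and protected against brute force")
```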

Ekaterina Kilyusheva, Head of the Information Security Analytics Research Group at Positive Technologies, said: “Network perimeters of most tested corporate information systems remain extremely vulnerable to external attacks.

“Our automated security assessment proved that all companies have network services available for connection on their network perimeter, allowing hackers to exploit software vulnerabilities and bruteforce credentials to these services.

“Even in 2020, there are still companies vulnerable to Heartbleed and WannaCry. Our research found systems at 26% of companies are still vulnerable to the WannaCry encryption malware.”

Minimizing the number of services on the network perimeter is recommended

Kilyusheva continued: “At most of the companies, experts found accessible web services, remote administration interfaces, and email and file services on the network perimeter. Most companies also had external-facing resources with arbitrary code execution or privilege escalation vulnerabilities.

“With maximum privileges, attackers can edit and delete any information on the host, which creates a risk of DoS attacks. On web servers, these vulnerabilities may also lead to defacement, unauthorized database access, and attacks on clients. In addition, attackers can pivot to target other hosts on the network.

“We recommend minimizing the number of services on the network perimeter and making sure that accessible interfaces truly need to be available from the Internet. If this is the case, it is recommended to ensure that they are configured securely, and businesses install updates to patch any known vulnerabilities.

“Vulnerability management is a complex task that requires proper instrumental solutions,” Kilyusheva added. “With modern security analysis tools, companies can automate resource inventories and vulnerability searches, and also assess security policy compliance across the entire infrastructure. Automated scanning is only the first step toward achieving an acceptable level of security. To get a complete picture, it is vital to combine automated scanning with penetration testing. Subsequent steps should include verification, triage, and remediation of risks and their causes.”

PCI SSC updates standard for payment devices to protect cardholder data

The PCI Security Standards Council has updated the standard for payment devices to enable stronger protections for cardholder data.

Meeting the accelerating changes of payment device technology

The PCI PIN Transaction Security (PTS) Point-of-Interaction (POI) Modular Security Requirements 6.0 enhances security controls to defend against physical tampering and the insertion of malware that can compromise card data during payment transactions.

Updates are designed to meet the accelerating changes of payment device technology, while providing protections against criminals who continue to develop new ways to steal payment card data.

“Payment technology is advancing at a rapid pace,” says Emma Sutcliffe, SVP, Standards Officer at PCI SSC. “The changes to this standard will facilitate design flexibility for payment devices while advancing the standard to help mitigate the evolving threat environment.”

Protecting PINs

Established to protect PINs and the cardholder data stored on the card (on magnetic stripe or the chip of an EMV card) or used in conjunction with a mobile device, PTS POI Version 6.0 reorganizes the requirements and introduces changes that include:

  • Restructuring modules into Physical and Logical, Integration, Communications and Interfaces, and Life Cycle to reflect the diversity of devices supported under the standard and the application of requirements based upon their individual characteristics and functionalities.
  • Limiting firmware approval timeframes to three years to help ensure ongoing protection against evolving vulnerabilities.
  • Requiring devices that accept EMV-enabled cards to support Elliptic Curve Cryptography (ECC) to help facilitate the EMV migration to a more robust level of cryptography (a brief ECC signing illustration follows this list).
  • Enhancing support for the acceptance of magnetic stripe cards in mobile payments using solutions that follow the Software-Based PIN Entry on COTS (SPoC) Standard.
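
To illustrate what Elliptic Curve Cryptography looks like in practice, outside of any payment-device firmware, here is a minimal sketch using the Python cryptography package to sign and verify a message with ECDSA on the NIST P-256 curve. It only demonstrates elliptic curve signatures in general; it is not an implementation of the PTS POI requirement itself, and the message content is a made-up placeholder.

```python
# Minimal ECC illustration using the 'cryptography' package (pip install cryptography):
# sign and verify a message with ECDSA over the NIST P-256 curve.
# Demonstrates elliptic curve signatures in general, not the PTS POI standard.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # NIST P-256 curve
public_key = private_key.public_key()

message = b"example transaction record"  # placeholder payload
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature verified")
except InvalidSignature:
    print("signature check failed")
```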

“Feedback from our global stakeholders, along with changes in payments, technology and security is driving the changes to this standard,” said Troy Leach, SVP at PCI SSC. “It’s with participation from the payments industry that the Council is able to produce standards that are relevant and enhance global payment card security.”

FIRST releases updated coordination principles for Multi-Party Vulnerability Coordination and Disclosure

The Forum of Incident Response and Security Teams (FIRST) has released an updated set of coordination principles – Guidelines for Multi-Party Vulnerability Coordination and Disclosure version 1.1.

Stakeholder roles and communication paths

The purpose

The purpose of the Guidelines is to improve coordination and communication across different stakeholders during a vulnerability disclosure and provide best practices, policy and processes for reporting any issues across multiple vendors.

It is targeted at vulnerabilities that have the potential to affect a wide range of vendors and technologies at the same time.

Previous best practices, policies and processes for vulnerability disclosure focused on bilateral coordination and did not adequately address the current complexities of multi-party vulnerability coordination.

Factors such as a vibrant open source development community, the proliferation of bug bounty programs, third party software, supply chain vulnerabilities, and the support challenges facing CSIRTs and PSIRTs are just a few of the complicating aspects.

Art Manion, Vulnerability Analysis Technical Manager, CERT Coordination Center said: “As software development becomes more complex and connected to supply chains, coordinated vulnerability disclosure practices need to evolve. The updated Guidelines are a step in that evolution, deriving guidance and principles from practical use cases.”

The content

The Guidelines for Multi-Party Vulnerability Coordination and Disclosure contain a collection of current best practices that consider more complex as well as typical real-life scenarios, going beyond a single researcher reporting a vulnerability to a single company.

The Guidance includes:

  • Establish a strong foundation of processes and relationships
  • Maintain clear and consistent communications
  • Build and maintain trust
  • Minimize exposure for stakeholders
  • Respond quickly to early disclosure
  • Use coordinators when appropriate
  • Multi-Party Disclosure Use Cases

FIRST Chair, Serge Droz said: “The Guidelines for Multi-Party Vulnerability Coordination and Disclosure is an important step towards a better and more responsible way of managing vulnerabilities.

“It was crucial that these Guidelines were created in tandem with key stakeholders who may be affected by multi-party vulnerabilities. I am proud that FIRST was able to bring these stakeholders together to work on this very important document.”

Researchers design a tool to identify the source of errors caused by software updates

We’ve all shared the frustration when software updates that are intended to make our applications run faster inadvertently end up doing just the opposite. These bugs, dubbed performance regressions in the computer science field, are time-consuming to fix since locating software errors normally requires substantial human intervention.

Schematic illustrating how Muzahid’s deep learning algorithm works. The algorithm is ready for anomaly detection after it is first trained on performance counter data from a bug-free version of a program.

To overcome this obstacle, researchers at Texas A&M University, in collaboration with computer scientists at Intel Labs, have now developed a completely automated way of identifying the source of errors caused by software updates.

The deep learning algorithm

Their algorithm, based on a specialized form of machine learning called deep learning, is not only turnkey, but also quick, finding performance bugs in a matter of a few hours instead of days.

“Updating software can sometimes turn on you when errors creep in and cause slowdowns. This problem is even more exaggerated for companies that use large-scale software systems that are continuously evolving,” said Dr. Abdullah Muzahid, assistant professor in the Department of Computer Science and Engineering.

“We have designed a convenient tool for diagnosing performance regressions that is compatible with a whole range of software and programming languages, expanding its usefulness tremendously.”

How does it work?

To pinpoint the source of errors within software, debuggers often check the status of performance counters within the central processing unit. These counters are lines of code that monitor how the program is being executed on the computer’s hardware, such as in memory.

So, when the software runs, counters keep track of the number of times it accesses certain memory locations, the time it stays there and when it exits, among other things. Hence, when the software’s behavior goes awry, counters are again used for diagnostics.

“Performance counters give an idea of the execution health of the program,” said Muzahid. “So, if some program is not running as it is supposed to, these counters will usually have the telltale sign of anomalous behavior.”

However, newer desktops and servers have hundreds of performance counters, making it virtually impossible to keep track of all of their statuses manually and then look for aberrant patterns that are indicative of a performance error. That is where Muzahid’s machine learning comes in.

By using deep learning, the researchers were able to monitor data coming from a large number of the counters simultaneously by reducing the size of the data, which is similar to compressing a high-resolution image to a fraction of its original size by changing its format. In the lower dimensional data, their algorithm could then look for patterns that deviate from normal.
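
The researchers' exact model is not reproduced here, but the general pattern the article describes, namely compressing many counter readings into a lower-dimensional representation learned from a bug-free run and then flagging samples the model fits poorly, can be sketched with scikit-learn's PCA standing in for the deep learning model. The counter data below is synthetic, and the 99th-percentile threshold is an illustrative assumption.

```python
# Sketch of the general pattern described above, with scikit-learn PCA standing in
# for the deep learning model used in the research: learn a low-dimensional model of
# performance-counter readings from a bug-free run, then flag readings from the
# updated build whose reconstruction error is unusually high. Data is synthetic.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# 500 samples x 200 counters from the known-good run (synthetic stand-in).
baseline = rng.normal(0.0, 1.0, size=(500, 200))

# Readings from the updated build: mostly normal, with a few anomalous samples.
updated = rng.normal(0.0, 1.0, size=(100, 200))
updated[90:] += rng.normal(6.0, 1.0, size=(10, 200))  # injected "regression"

# Compress to a handful of dimensions learned from healthy behavior only.
pca = PCA(n_components=10).fit(baseline)

def reconstruction_error(model, samples):
    """Mean squared error between samples and their low-dimensional reconstruction."""
    reduced = model.transform(samples)
    restored = model.inverse_transform(reduced)
    return np.mean((samples - restored) ** 2, axis=1)

# Flag updated-build samples whose error exceeds the 99th percentile of baseline error.
threshold = np.percentile(reconstruction_error(pca, baseline), 99)
errors = reconstruction_error(pca, updated)
flagged = np.where(errors > threshold)[0]
print("samples flagged as anomalous:", flagged)
```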

The versatility of the algorithm

When their algorithm was ready, the researchers tested if it could find and diagnose a performance bug in a commercially available data management software used by companies to keep track of their numbers and figures. First, they trained their algorithm to recognize normal counter data by running an older, glitch-free version of the data management software.

Next, they ran their algorithm on an updated version of the software with the performance regression. They found that their algorithm located and diagnosed the bug within a few hours. Muzahid said this type of analysis could take a considerable amount of time if done manually.

In addition to diagnosing performance regressions in software, Muzahid noted that their deep learning algorithm has potential uses in other areas of research as well, such as developing the technology needed for autonomous driving.

“The basic idea is once again the same, that is being able to detect an anomalous pattern,” said Muzahid. “Self-driving cars must be able to detect whether a car or a human is in front of it and then act accordingly. So, it’s again a form of anomaly detection and the good news is that is what our algorithm is already designed to do.”