The Administrative Office (AO) of the U.S. Courts revealed on Wednesday that it is investigating whether sealed U.S. court records were accessed by the SolarWinds attackers. In related news, SolarWinds has hired former CISA director Chris Krebs and Stanford University professor and former Facebook CSO Alex Stamos to help it recover from the hack that led to the compromise of a considerable number of businesses (including FireEye and Microsoft) and US government agencies. The … More
The post Sealed U.S. court records possibly accessed by SolarWinds attackers appeared first on Help Net Security.
Welcome to the New Year, where we believe most organizations will continue to work through their digital transformation practices. These updated practices heavily impact IT and business leaders who need to expedite their migration to public clouds and in many situations minimize their physical data center footprint. With that comes numerous challenges, including data privacy and security. With the COVID pandemic shaping how businesses have adapted to shelter-in-place and remote staff, we will see a … More
In this article I’ll consider next year’s data security landscape with a focus on the two key issues you need to have on your planning agenda. Of course, how the pandemic plays out will have a huge say on tactical questions ranging from budget to manpower to project priorities – but these long-term strategic trends will impact IT organizations well beyond 2021. The “bring your own” genie will leave the bottle Over the last decade, … More
The post The need for zero trust security a certainty for an uncertain 2021 appeared first on Help Net Security.
The migration toward subscription-based services via the SaaS business model isn’t new this year — it’s part of a larger shift away from on-premises datacenters, applications, etc., that has been underway for years. The pandemic accelerated the shift, boosting SaaS subscriptions as companies looked for virtual collaboration and meeting tools. What is new on a larger scale is the way employees interact with business applications, and that has implications for IT departments worldwide. As a … More
Data transparency allows people to know what personal data has been collected, what data an organization wants to collect and how it will be used. Data control provides the end-user with choice and authority over what is collected and even where it is shared. Together the two lead to a competitive edge, as 85% of consumers say they will take their business elsewhere if they do not trust how a company is handling their data. … More
The post How do I select a data control solution for my business? appeared first on Help Net Security.
For those working remotely during the pandemic, changes to how work is done have significantly increased stress levels – and when we’re stressed, we’re more likely to make mistakes that result in sensitive data being inadvertently put at risk. Our 2020 Outbound Email Security Report revealed that stressed and tired employees are behind 37% of the most serious data leaks – caused by all-too-common culprits, including adding an incorrect recipient to an email, attaching the … More
The post Stress levels are rising, but that doesn’t have to mean more security incidents appeared first on Help Net Security.
As many companies continue to grapple with a remote workforce, overall employee security measures become more critical, especially as many employees rely on personal devices and networks for work.

Manage company security

The online survey, conducted by The Harris Poll on behalf of Dashlane among more than 1,200 employed Americans, sheds light on how employees view and manage company security, and reveals they aren’t necessarily taking the security of their work accounts as seriously as … More
The pandemic has accelerated digital transformation for 88% of global organizations. However, this increase in cloud adoption may leave business data insecure, Trend Micro reveals.
Accelerated cloud migration
“But the survey findings also highlight the challenges remaining with understanding security in the cloud. Cloud adoption is not a ‘set it and forget it’ process, but takes ongoing management and strategic configuration to make the best security decisions for your business.”
Customers are responsible for securing their own data
The survey confirms a common misconception that can lead to serious security consequences: while the cloud provider secures the underlying infrastructure, customers are responsible for securing their own data – the basis of the Shared Responsibility Model for cloud.
92% of respondents say they are confident they understand their cloud security responsibility, but 97% also believe their cloud service provider (CSP) offers sufficient data protection.
Of those surveyed, only 55% of respondents use third-party tools to secure their cloud environments. This suggests that there may be significant coverage gaps and indicates that the shared responsibility model is not fully understood.
The research has found that misconfigurations are the number one risk to cloud environments, which can happen when companies don’t know their part of the Shared Responsibility Model.
Organizations confident in their cybersecurity posture
The surveyed organizations seem to be confident in their cybersecurity posture in the cloud, as:
- 51% claim the accelerated cloud migration has increased their focus on security best practices
- 87% believe they are fully or mostly in control of securing their remote work environment
- 83% believe they will be fully or mostly in control of securing their future hybrid workplace
Despite this confidence, many respondents also admitted to experiencing security related challenges:
- 45% said that security is a “very significant” or “significant” barrier to cloud adoption
- Setting consistent policies (35%), patching (33%), and securing traffic flows (33%) were cited as the top three day-to-day operational headaches of protecting cloud workloads
- Data privacy (43%), staff training (37%) and compliance (36%) were reported as significant barriers in migrating to cloud-based security tools
“The good news is that by using smart, automated security tools, organizations can migrate to the cloud headache-free, ensuring the privacy and safety of their data and overcoming skills shortages as they do,” Nunnikhoven added.
Security solutions for cloud environments rated most important to responding organizations were network protection (28%), cloud security posture management (26%) and cloud access security broker (19%) tools.
More than 45 million medical images – including X-rays and CT scans – are left exposed on unprotected servers, a CybelAngel report reveals.
The analysts discovered millions of sensitive images, including personal healthcare information (PHI), were available unencrypted and without password protection.
No need for a username or password
The analysts found that openly available medical images, including up to 200 lines of metadata per record which included PII (personally identifiable information; name, birth date, address, etc.) and PHI (height, weight, diagnosis, etc.), could be accessed without the need for a username or password. In some instances login portals accepted blank usernames and passwords.
“The fact that we did not use any hacking tools throughout our research highlights the ease with which we were able to discover and access these files,” says David Sygula, Senior Cybersecurity Analyst at CybelAngel.
“This is a concerning discovery and proves that more stringent security processes must be put in place to protect how sensitive medical data is shared and stored by healthcare professionals. A balance between security and accessibility is imperative to prevent leaks from becoming a major data breach.”
Todd Carroll, CybelAngel CISO further commented, “Medical centers work with a vast, interconnected web of third-party providers and the cloud is an essential platform for sharing and storing data. However, gaps in security, such as this, present a huge risk, both for the individuals whose data is compromised and the healthcare institutions that are governed by regulations to protect patients’ data.
“The health sector has faced unprecedented challenges this year, however the security and privacy of their patients’ most personal records must be protected, to prevent highly confidential data falling into the wrong hands.”
Security risks of publicly accessible images
The report highlights the security risks posed by publicly accessible images containing highly personal information, including ransomware and blackmail. Fraud is a particular risk, as this type of imagery fetches a premium on the dark web.
Simple steps that healthcare facilities can take to safeguard the way they share and store data include:
- Determine if pandemic response exceeds your security policies: Ad hoc NAS devices, file-sharing apps and contractors may take data beyond your ability to enforce access controls.
- Ensure proper network segmentation of connected medical imaging equipment: Minimize the exposure of critical diagnostic equipment and supporting systems to wider business or public networks.
- Conduct a real-world audit of third-party partners: Assess which parties may be unmanaged or not in compliance with required policies and protocols.
One of the most notable ballot propositions impacting the privacy and cybersecurity world during the US 2020 election was the passage of the California Privacy Rights Act (CPRA).
Predominantly considered an updated version of 2018’s California Consumer Privacy Act (CCPA), the CPRA incorporates several changes other than the highly touted establishment of the California Privacy Protection Agency (CPPA).
Not only does the CPRA incorporate several changes that might place a burden on small retailers, it also focuses more specifically on cybersecurity, hinting at the future of privacy and security legislation.
What new duties does the CPRA impose?
The new iteration of the California law specifically incorporates data security and integrity requirements in several places. The changes are scattered across the CPRA’s fifty-three pages. When brought together, they show a shift toward making the CPRA a hybrid privacy-security regulation.
The first mention occurs in section 100, which requires that businesses collecting personal information “shall implement reasonable security procedures and practices.” This new language highlights the deeply intertwined relationship between security and privacy. The CCPA hinted at security controls, but the CPRA outright requires them.
This new mandate aligns with the following addition of “security and integrity” in the definitions section:
- the ability: (1) of a network or an information system to detect security incidents that compromise the availability, authenticity, integrity, and confidentiality of stored or transmitted personal information; (2) to detect security incidents, resist malicious, deceptive, fraudulent, or illegal actions, and to help prosecute those responsible for such actions; and (3) of a business to ensure the physical safety of natural persons.
This definition reinforces proactive cybersecurity monitoring and threat detection as important to ensuring privacy. Specifically, the phrase “to help prosecute those responsible” indicates that organizations that must comply with the CPRA need to maintain appropriate forensic documentation that enables them to work with law enforcement.
How does the CPRA change the definition of data collection?
From a purely academic position, the new definitions of consent, dark patterns, and cross-context behavioral advertising indicate that the CPRA looks to the future of data collection technologies.
The definition of consent specifically states:
- acceptance of a general or broad term of use or similar document that contains descriptions of personal information processing along with other, unrelated information, does not constitute consent. Hovering over, muting, pausing, or closing a given piece of content does not constitute consent. Likewise, agreement obtained through use of dark patterns does not constitute consent.
Both of these notifications could be considered broad terms and conditions. Additionally, both contain personal information processing along with “other, unrelated information” such as the marketing assets the user wants to download.
However, the CPRA goes further in the definitions section to include marketing technologies that gather user intent data. Many websites use “heatmaps” that collect information on where users click, what videos they watch or pause, and what areas they hover over. For example, tools such as Decibel and Hotjar are behavior analytics tools that give insight into what content users click through to, whether they get distracted by non-clickable elements, and whether they respond to opt-ins. The CPRA’s language indicates that businesses will need to obtain consent before collecting this information.
The CPRA goes yet another step further, defining “dark patterns” as “a user interface designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision-making, or choice, as further defined by the regulation.” Dark patterns are marketing ploys that try to leverage users’ emotions against them, such as email request boxes with buttons that say, “No thanks, I don’t want a discount today.” Under the CPRA, these would be considered non-compliant tactics.
Finally, the CPRA covers all its privacy bases by including the following definition of cross-context behavioral advertising:
- targeting of advertising to a consumer based on the consumer’s personal information obtained from the consumer’s activity across businesses, distinctly-branded websites, applications, or services, other than the business, distinctly-branded website, application, or service with which the consumer intentionally interacts.
In other words, if a consumer looks to buy something from the Gap, the Gap cannot use that information to target advertising for Banana Republic clothing.
Consumer businesses will need to specifically delineate their consumer data collection repositories and be more proactive about the way in which they position their digital marketing strategies.
How the CPRA impacts the data supply chain
CPRA also tackles the data supply chain, giving specific directions on what and how service providers and contractors fit into the privacy puzzle. Sections 105, 121, and 130 all reference these third-party data organizations that, when aggregated, create a series of contractual requirements across the data supply chain.
First, under Section 105, “Consumers’ Right to Delete Personal Information,” the CPRA clarifies that service providers are beholden only to the businesses with which they contract, not to consumers. The section also creates a waterfall approach for deleting personal data: businesses must notify their service providers and contractors, who in turn must notify their own service providers and contractors. Presumably, this waterfall continues down the data supply stream until no contracted parties remain.
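That waterfall can be sketched as a simple recursive propagation of the deletion request. This is a conceptual illustration only — the party names and the provider graph below are hypothetical, not anything prescribed by the statute:

```python
def propagate_deletion(party, downstream, deleted=None):
    """Delete a consumer's data at `party`, then forward the request
    to every service provider/contractor that party works with."""
    if deleted is None:
        deleted = []
    deleted.append(party)  # this party erases its copy of the data
    for subcontractor in downstream.get(party, []):
        propagate_deletion(subcontractor, downstream, deleted)
    return deleted

# Hypothetical data supply chain: business -> providers -> sub-providers
chain = {
    "retailer": ["email-vendor", "analytics-vendor"],
    "analytics-vendor": ["cloud-storage-vendor"],
}

print(propagate_deletion("retailer", chain))
# ['retailer', 'email-vendor', 'analytics-vendor', 'cloud-storage-vendor']
```

Each business only contacts its direct providers, yet the request still reaches every party downstream — which is exactly why the statute can keep consumers and sub-providers at arm's length from one another.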
Second, the CPRA establishes section 121, a new provision not in the CCPA. This section gives consumers the right to limit how businesses use their data and requires businesses to push those limitations downstream as well. Fundamentally, this provision means that consumers can now create accounts for services, such as purchasing through a business-owned application, but limit the use of that data to that single case.
Finally, under section 130, the CPRA clarifies service provider and contractor responsibilities, focusing on contractual obligations. Service providers and contractors need to respond only to requests as provided by the businesses with whom they contract. This section reinforces the distance between consumers and a business’s service providers and contractors.
What can we hypothesize about the direction CPRA takes data privacy and security?
Fundamentally, CPRA gives a lot of insight into the way that data security and privacy increasingly intertwine. The CPRA no longer hints at the interconnection but specifically speaks to data security best practices. It additionally goes further than other regulations by requiring businesses to provide data security event information that helps track cybercriminals after an incident occurs.
More importantly, CPRA’s clarifications create a morass of requirements that make data retrieval difficult. These requirements enforce data minimization by placing undue burdens on businesses and the data supply chain when responding to consumer requests. For example, Section 130(3)(B)(ii) now requires businesses to provide consumers, upon request, with “the specific pieces of personal information obtained.”
Originally, under CCPA, businesses needed to share the categories of information. By requiring them to supply the specific pieces of personal information, businesses that need to respond to consumers now need to think more carefully about the data they collect. If the “pieces of data” collected come from website heatmaps, then businesses need to be able to segregate that data out if a consumer requests it.
In short, many of these new requirements force businesses to think more carefully about the information they collect. If a business needs to furnish data upon customer request, it needs to know the specific pieces of information it collects, not just the categories. Since this will increase the operational costs associated with responding to these requests, the CPRA fundamentally gives businesses two options. Collect all the data but pay the operational costs when responding to consumer requests or limit data collection as much as possible to reduce the operational costs of responding to consumer requests.
By January 2020, at least three states had prepared new privacy legislation based on the CCPA. As data privacy and security professionals look to the future of privacy regulations, the CPRA creates new fundamental requirements that states and the US federal government may use to strengthen consumer data rights.
Each year seems to come with more cyber threats, “bad actors,” ransomware and data breaches. The security industry is on fire right now with technology providers continuing to innovate and develop new ways to help organizations defend against all these threats. However, not all of the security budget should be spent on prevention – organizations need to invest in a key IT trend in 2021: cyber resilience.
No matter how much investment is made in traditional security elements like firewalls and DLP, data breaches will continue to occur and organizations must remain operational, even during a crisis. Cyber resilience is the concept that an organization must be prepared if or when a breach occurs – how do they get back up and running with minimal disruption to the business?
How did we get here?
The world has become incredibly dependent on technology and cloud computing, which is triggering a rise in cybercrime and, as a result, positioning cyber security as a hot topic for organizations everywhere.
Cyber resilience has begun to enter the mainstream, as the focal point turns from just securing the borders to making sure business operations can bounce back after an attack, through cyber resilience practices. The goals here are to ensure that network and IT systems data is protected and can be recovered in the event of a data breach.
In 2021, security vendors will be in a race to deliver next-generation tools and processes — an additional layer of defense — to safeguard businesses a step further. Encryption, key management and cyber resilience frameworks will emerge as everyday strategies to address compromised data, for IT security teams globally.
The end goal will be to protect data, reduce or eliminate loss, and meet the growing list of regulatory compliance requirements, like HIPAA, PCI-DSS, GLBA, NERC, FERC, GDPR and new regulations like the CCPA in California.
Other key 2021 security trends
While cyber resilience will be one of the focus areas in next year’s landscape, several other themes will be prominent for IT managers next year. A shortlist of the top five is below:
- Zero trust architectures solidify. The quick shift to more people working remotely has exposed home network environments which are oftentimes less secure and more exposed than corporate networks. This will continue to force organizations to think beyond securing only within the walls of the enterprise. Zero trust architectures will evolve beyond the hype to create real-world security offerings that enhance the “moat and wall” paradigm, rather than replace it.
- Confidential computing will mature as more trusted execution environment (TEE) technologies emerge. All three of the big IaaS vendors (AWS, Azure, Google Cloud) are already building TEE offerings as the final frontier of data protection. In turn, data-in-use protection will become required by emerging roles and technologies within the enterprise.
- Data security hits CxO primetime. Data security will no longer be the purview of just the CISO but move partially into the hands of the chief data officer and the chief privacy officer. Confidential computing will help facilitate this move as new operating budgets will be used to provide greater transparency around what data can be used and by whom. For example, aggregate data may be offered to third party analytics platforms for use in forecasting.
- Adoption of new encryption tech emerges ahead of 2020’s predicted curve. Newer data protection technologies, such as homomorphic encryption, will be adopted sooner than predicted as real-world use cases, like voting protection, demand solutions sooner rather than later.
- The “separation of lock and key” becomes a requirement. If an encryption key is lost, the data cannot be restored in any way. IT teams everywhere will adopt the separation of locks (the encryption applied to the data) and keys (the digital keys that unlock it) — storing keys apart from the data they protect — as a best practice for data security.
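A toy sketch of the idea follows. This is illustrative only, not secure cryptography — a SHA-256-based keystream stands in for a real cipher such as AES-GCM — but it shows the point: the ciphertext can live alongside the data while the key lives in a separate key-management system, and losing or withholding the key renders the data unrecoverable.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream.
    Applying it twice with the same key round-trips the data."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key = secrets.token_bytes(32)                 # the "key": kept in a separate KMS/HSM
plaintext = b"patient record #42"
ciphertext = keystream_xor(key, plaintext)    # the "lock": stored with the data

# With the key, the data is recoverable; without it, the ciphertext is useless.
assert keystream_xor(key, ciphertext) == plaintext
```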
The ramifications of last year’s global pandemic will continue to drive unprecedented digital transformation. Better, stronger security solutions that were previously unavailable will hit the street. Not only will new technologies emerge to lock down corporate data, security as a whole will be positioned as a key initiative for 2021 at the executive level.
Organizations will embrace new edge and remote technologies to further extend worker productivity and implement more security practices like data encryption to further safeguard the distributed workforce of the future.
Against the backdrop of intensifying cyber conflicts and the rapidly evolving threat landscape, a new wave of techno-nationalism is being trumpeted from almost every corner of the world.
The U.K. just announced it will ban the installation of Huawei 5G gear by the end of September 2021 and the FCC rejected a petition from ZTE asking for reconsideration of their finding that the Chinese company is a national security threat to communications networks. Meanwhile, ByteDance is trying to meet the requirements of both the U.S. government and China’s new Export Control Law so that TikTok can continue to exist in the U.S.
The U.S. is also pushing to persuade countries like Brazil to shun Chinese equipment as they develop their digital infrastructures, offering financial assistance to use Washington-approved alternatives. This led to Brazil’s top four telecom companies refusing to meet with a senior U.S. official advocating for exclusion of Huawei from the Brazilian 5G market. In their home country of China, Huawei and other tech companies are grumbling about Nvidia’s acquisition of U.K. chip designer Arm (the deal is still awaiting regulatory approval).
Across the world today, people are using smartphones made in China, and have personal information scattered around various data centers in India or the Philippines, via hosted service providers and call centers. Data is now fluid, mobile and global – that genie is out of the bottle and embargoes against specific companies’ or countries’ technologies will ultimately have limited impact from a security perspective.
A false sense of security
Techno-nationalism is fueled by a complex web of justified economic, political and national security concerns. Countries engaging in “protectionist” practices essentially ban or embargo specific technologies, companies, or digital platforms under the banner of national security, but we are seeing it used more often to send geopolitical messages, punish adversary countries, and/or prop up domestic industries.
Blanket bans give us a false sense of security. At the same time, when any hardware or software supplier is embedded within critical infrastructure – or on almost every citizen’s phone – we absolutely need to recognize the risk.
We need to take seriously the concern that their kit could contain backdoors that could allow that supplier to be privy to sensitive data or facilitate a broader cyberattack. Or, as is the lingering case with TikTok, the concern is whether the collection of data on U.S. citizens via an entertainment app could be forcibly seized under Chinese law and enable state-backed cyber actors to then target and track federal employees or conduct corporate espionage.
We cannot ignore that nation states around the world are increasingly turning to cyber operations to gather intelligence, wield influence, and disrupt their adversaries. But we must remember that technology made closer to home, whether in proximity or in ideology, is not out of reach of compromise, nor is it automatically more secure.
Digital deception and trust
Trust alone is never a sound security strategy. To echo the words of former U.S. President Reagan (who was, appropriately enough, quoting a Russian proverb): “Trust, but verify.” In cybersecurity, “verify” means not blindly trusting the technology you are leveraging, but instead taking the actions needed to monitor and audit in real-time.
Trust is a tool itself that attackers commonly employ in methods of digital deception. Indeed, spoofed login pages from reputable SaaS platforms have been used as a means of harvesting compromised credentials from unwitting victims.
Regardless of whether a cloud provider is based in the U.S., China, or elsewhere, attackers will still seek creative means to exploit both the vulnerabilities in these technologies and the ever-present threat of human error. For example, foreign actors will attempt to infiltrate the supply chains of hardware or software tools, sometimes by simply paying an insider to do the job for them.
In other words: purchasing decisions rooted in techno-nationalism, or, conversely, techno-globalism, are both essentially susceptible to the same security threats. And so, when we target a specific company or technology, rather than critically evaluate our underlying security strategy and defensive technologies, we do not actually strengthen our security posture, but instead chase a red herring.
National security is about much more than blanket bans on specific organizations and technologies. Rather, it is about cybersecurity and operations resilience against the ever-present reality of threats in cyber space—crucially, regardless of where the attacks come from or what technology attackers are targeting.
Building resilience moving forward
Nowadays, cyberattacks are advancing at a rate that outpaces attempts to define indicators of threat in advance. The strength of any cybersecurity stance accordingly lies in its ability to understand and maintain normal conditions internally, not in its attempts to predict the nature of future external threats. This truth holds regardless of whether the threat actor is motivated by financial, strategic or political concerns.
The focus on individual companies distracts from the realities of cyber-defense. Rather than decreasing or restricting the technology ecosystem, national security concerns can actually be advanced by gaining further visibility into critical digital environments. By gaining an in-depth understanding of these environments, we can manage risk in our complex landscape.
Historically, this level and scale of understanding into the ever-growing complexity of digital environments would have been at the limits of a human security team, more likely beyond. However, it is not beyond those teams leveraging AI and machine learning. These technologies excel at achieving a comprehensive and granular understanding of the behaviors and technologies that comprise a technology ecosystem.
Today’s techno-nationalism is taking off and will probably continue to do so because it is responding to real issues in a very observable way, even though it is ultimately ineffective. And so, the stakes remain high. Hidden backdoors in component parts and supply side technologies are used as an entry point for foreign malicious actors. Once attackers gain entry, economic espionage can lead to incalculable financial damage. Further, disrupted critical national infrastructure, such as power grids and gas lines, can lead to devastating costs for a nation.
The persistence of these threats calls for a practical response. Techno-nationalism, though rising in popularity, simply does not rise to the greater security challenge. Rather than blocking access to foreign technologies in a great game of whack-a-mole, national security can actually be advanced by implementing AI-enabled digital understanding. The rigid scrutiny and real-time attack disruption achieved by AI’s holistic approach provide robust cyber defense across the full range of technologies that can be implemented, regardless of the attack’s origin.
2020 was a “transformative” year, a year of adaptability and tackling new challenges. As we worked with organizations to deploy mission-critical data security, cryptography was comparatively stable. What cryptographic trends will gain traction in 2021?
The cloud will play a bigger role, especially in financial services
The movement toward broad acceptance of cloud-based encryption and key management will accelerate as more of the pieces come together. Organizations have become more aggressive with the cloud, especially financial services organizations that are moving toward payment processing in the cloud.
Cloud providers are offering more robust and flexible security to meet the demands of organizations who want to retain control of the keys and avoid being vendor locked. Cloud providers have been listening to enterprises about their concerns around data security practices and are making forward strides with data access, key management, and data retention policies.
Homomorphic encryption will be part of your vocabulary
Homomorphic encryption allows data to remain encrypted while it is being processed and manipulated. Homomorphic encryption could be used to secure data stored in the cloud or in transit. This gives organizations the ability to use data — such as running analytics on a customer base — without ever exposing the underlying data.
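As a concrete illustration, the Paillier cryptosystem — a partially homomorphic scheme — lets anyone add two encrypted values by multiplying their ciphertexts, without ever decrypting them. The sketch below uses tiny primes for demonstration only; real deployments use keys of 2048 bits or more:

```python
import math
import random

def L(u, n):
    return (u - 1) // n

def paillier_keygen(p, q):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    mu = pow(L(pow(g, lam, n * n), n), -1, n)  # modular inverse via pow(_, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    while True:
        r = random.randrange(1, n)     # random blinding factor coprime to n
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return (L(pow(c, lam, n * n), n) * mu) % n

pub, priv = paillier_keygen(17, 19)    # toy primes; never use at this size
c1, c2 = encrypt(pub, 20), encrypt(pub, 22)
c_sum = (c1 * c2) % (pub[0] ** 2)      # multiplying ciphertexts adds plaintexts
print(decrypt(pub, priv, c_sum))       # 42
```

The party doing the addition never sees 20, 22, or the private key — which is the property that makes outsourced analytics on encrypted data possible. Fully homomorphic schemes extend this to arbitrary computation, at a much higher cost.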
BYOE adoption will increase
Bring Your Own Encryption (BYOE) will increase. BYOE is the next evolution of organizations being able to determine the level of control they want when it comes to managing their data security policies.
For example, what happens if an organization gets subpoenaed and its cloud provider turns its files over to the authorities? If the organization controlled its keys and could do client-side encryption on-premises, the data would be useless. There will likely be a big catalyst event whereby a company goes, “Whoa — what do you mean, a third party can release my information over to a legal authority?”
Encryption + key management, critical with shorter certificate lifecycles
Organizations need both encryption and key management to be tighter than ever. As the industry moves to one-year certificates, organizations are managing shorter digital certificate schedules. It’s more important than ever to keep track of expiration dates, and automation will play a big role.
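A minimal sketch of what such automation might look like follows. The hostnames, dates, and 45-day renewal threshold are hypothetical; the date format matches the `notAfter` string that Python's `ssl` module reports for a peer certificate:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Parse a certificate's notAfter string (e.g. 'Jun  1 12:00:00 2021 GMT')
    and return the whole days remaining until it expires."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - now).days

# Hypothetical certificate inventory (host -> notAfter from the cert)
inventory = {
    "www.example.com": "Jun  1 12:00:00 2021 GMT",
    "api.example.com": "Feb  1 12:00:00 2021 GMT",
}

now = datetime(2021, 1, 1, tzinfo=timezone.utc)
for host, not_after in inventory.items():
    days = days_until_expiry(not_after, now)
    if days < 45:  # renew well before the one-year certificate lapses
        print(f"RENEW {host}: {days} days left")
```

In a real pipeline the inventory would be refreshed automatically (for example from the certificates presented by live endpoints) and the warning would open a ticket or trigger an ACME renewal rather than print.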
To improve their security postures, organizations will emphasize bringing key management up to the same level as their encryption programs. Good policies and good encryption count for little if they are undermined by poor key management.
Cryptography will be significant in DevSecOps, especially for code signing
Getting tools that DevOps needs to secure its infrastructure — without slowing it down — will be critical. Looking at key management, hardware security modules (HSMs), crypto, and third-party monitoring tools, organizations will emphasize giving DevOps teams what they need to integrate security and to quickly identify and resolve trouble areas.
The goal will be to take away the pain points while expanding the use of encryption within the organization. When it comes to code signing, HSMs play a critical role. Code signing certificates, secure key generation, and certificate storage should be centralized and automated, natively integrating with CI/CD systems.
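The centralized, CI-integrated signing step described above can be sketched as follows. Real code signing uses an HSM-backed asymmetric key (RSA or ECDSA) with an X.509 certificate; an HMAC stands in here only so the sketch stays self-contained, and all names are illustrative.

```python
import hashlib
import hmac

# Sketch of a signing step a CI/CD pipeline might call. In production
# the key is generated and held inside an HSM and the signature is
# asymmetric; this symmetric stand-in just shows the pipeline shape.

SIGNING_KEY = b"hsm-resident-key-placeholder"  # would never leave the HSM

def sign_artifact(artifact: bytes) -> str:
    digest = hashlib.sha256(artifact).digest()   # hash the build output
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_artifact(artifact: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_artifact(artifact), signature)

build = b"binary-release-1.0.0"
sig = sign_artifact(build)                      # CI calls the signing service
assert verify_artifact(build, sig)              # deploy step verifies first
assert not verify_artifact(build + b"!", sig)   # tampered artifact is rejected
```

Centralizing this behind a service means developers never handle signing keys directly, which is the point of the HSM integration the article describes.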
Manufacturers of long-term devices to embrace crypto agility
There has been a lot of talk in 2020 about quantum computers breaking current cryptography. In 2021, manufacturers of devices that will be in service for 10 to 20 years (satellites, cars, weapons, medical devices) will be smart to embrace quantum-safe cryptography. A crypto-agile solution could entail implementing hybrid certificates: signing with conventional asymmetric algorithms now, while building in enough flexibility to transition smoothly to quantum-safe algorithms when the threat materializes.
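The hybrid-certificate idea can be sketched as a record carrying two signatures, with verifiers choosing the path they understand. Field names and the stub checks are illustrative assumptions; real hybrid certificates follow emerging X.509 extension formats.

```python
from dataclasses import dataclass

# Illustrative sketch of a crypto-agile "hybrid" certificate: one
# conventional signature for today's verifiers, one quantum-safe
# signature for upgraded verifiers. The verification logic is stubbed.

@dataclass
class HybridCertificate:
    subject: str
    classical_sig: str       # e.g., an ECDSA signature today
    quantum_safe_sig: str    # e.g., a lattice-based signature later

def verify(cert, trust_quantum_safe):
    # Upgraded verifiers check the quantum-safe signature; legacy
    # verifiers fall back to the classical one. Both stay valid
    # during the transition period.
    if trust_quantum_safe and cert.quantum_safe_sig:
        return cert.quantum_safe_sig.startswith("pq:")   # stub check
    return cert.classical_sig.startswith("ecdsa:")       # stub check

cert = HybridCertificate("device-042", "ecdsa:abc123", "pq:def456")
assert verify(cert, trust_quantum_safe=False)  # legacy path still works
assert verify(cert, trust_quantum_safe=True)   # upgraded path works too
```

The design choice is that a long-lived device shipped today never needs a hardware recall: flipping `trust_quantum_safe` later is a software policy change.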
Whether it’s the cloud and organizations retaining control of the keys, BYOE and homomorphic encryption, DevSecOps embracing cryptography, or hybrid certificates for crypto agility, two themes stand out:
- Encryption and key management: you can’t have one without the other
- Shorter certificate lifecycles require more attention to key management than ever
We’re in for an exciting year ahead!
Only 37% of organizations say they definitely have the skills and technology to keep pace with digital projects during the COVID-19 pandemic, a MuleSoft survey reveals.
Access to data is paramount
82% of line of business (LoB) employees believe they need quick and easy access to data, IT systems, and applications to do their jobs effectively and remain productive.
Access to data is critical, as 59% of LoB employees are involved in identifying, suggesting, or creating new ways to improve the delivery of digital services externally, such as building an online self-service portal or a customer-facing mobile application. Yet only 29% think their organization is very effective in connecting and using data from multiple sources to drive business value.
“This research shows data is one of the most critical assets that businesses need to move fast and thrive into the future. Organizations need to empower every employee to unlock and integrate data — no matter where it resides — to deliver critical, time-sensitive projects and innovation at scale, while making products and services more connected than ever.”
Data silos increasingly slow down digital initiatives
According to McKinsey, businesses that once mapped digital strategy in one- to three-year phases must now scale their initiatives in a matter of days or weeks.
This report also sheds light on the pandemic driving an increase in digital initiatives of 11-23% on average, and highlights what is hampering the pace of business and the ability to meet customer expectations:
- Data silos: 33% of LoB employees say the COVID-19 pandemic has revealed a lack of connectivity between existing IT systems, applications, and data as an inefficiency when it comes to digital delivery.
- A lack of digital skills: 29% of LoB employees say a lack of digital skills across the business is also an inefficiency when delivering digital projects.
- Already stretched IT teams can't deliver projects quickly enough: 51% of LoB employees are currently frustrated by the speed at which their IT team can deliver digital projects.
Integration challenges directly impact revenue and customer experiences
In light of increasing operational inefficiencies, it is not surprising that 54% of LoB respondents say they are frustrated by the challenge of connecting different IT systems, applications, and data at their organization. Many view this weakness as a threat to their business and the ability to provide connected customer experiences.
- Siloed systems and data slow down business growth: LoB employees are well aware of the repercussions of failing to connect systems, applications, and data. 59% agree that failure in this area will hinder business growth and revenue.
- Behind disconnected experiences are disconnected systems, applications, and data: 59% of LoB employees agree that an inability to connect systems, applications, and data will negatively impact customer experience — a fundamental prerequisite for business success today.
- Automation initiatives require integration: 60% of respondents admit that failure to connect systems, applications, and data will also hinder automation initiatives. This comes at a time when a growing number of organizations are looking to automate business processes via capabilities such as robotic process automation (RPA).
Organizations need to move faster
As demands for digital initiatives grow, organizations across industries need to move faster than ever before. Business users are frustrated by data silos, slowing their ability to meet customer demand and innovate in today’s all-digital, work-from-anywhere world.
The report highlights the need to democratize these capabilities by giving business users the tools they need to easily and quickly unlock data, connect applications, and automate processes.
- Organizations need to scale innovation beyond the four walls of IT: 58% of LoB employees think IT leaders are spending more of their time “keeping the lights on” rather than supporting innovation. Furthermore, 44% go as far as to say they think their organization’s IT department is a blocker on innovation. By using a self-serve model that empowers everyone to unlock data, IT can enable innovation everywhere — in a way that’s governed but not gated by IT. IT can then be freed up from tactical integrations and maintenance to focus more on innovating and delivering high impact projects.
- Partnership with IT will be key to driving innovation: 68% of respondents think that IT and LoB employees should come together to jointly drive innovation in their organization.
- LoB employees need easy access to data to go faster: 80% of respondents think it would be beneficial to their organization if data and IT capabilities were discoverable and pre-packaged building blocks, which allow LoB employees to start creating digital solutions and deliver digital projects for themselves.
Third-party SaaS apps (and extensions) can significantly extend the functionality and capabilities of an organization’s public cloud environment, but they can also introduce security concerns. Many have permission to read, write, and delete sensitive data, which can have a tremendous impact on security, business, and compliance risk.
Assessing the risk of these applications is the key to maintaining a balance between safety and productivity. How can organizations take advantage of these apps’ convenience while also maintaining a secure environment?
Understanding the risk
In an ideal world, each potential application or extension would be thoroughly evaluated before it is introduced into the environment. However, with most employees still working remotely and administrators having limited control over their online activity, reducing the risk of potential data loss is just as important after the fact. In most cases, the threats from third-party applications come from two different directions:
- The third-party application may try to leak your data or contain malicious code
- The application may be legitimate but poorly written, leading to security gaps; poorly coded applications can introduce vulnerabilities that lead to data compromise
Google takes no responsibility for the safety of the applications on Marketplace, so any third-party app or extension downloaded by your employees becomes your organization’s express responsibility.
Application security best practices
While Google has a screening process for developers, users are solely responsible for compromised or lost data. Businesses must take hard and fast ownership of screening third-party apps for security best practices. What are the best practices that Google outlines for third-party application security?
- Properly evaluate the vendor or application
- Screen gadgets and contextual gadgets carefully
Google notes that you should evaluate all vendors and applications before using them in your Google Workspace environment. To determine whether a vendor or application is acceptable from a Google Workspace security perspective before you install it:
- Look at reviews left by customers who have downloaded and installed the third-party application. Reviews are listed for all Google Workspace Marketplace apps
- Contact the third-party application vendor directly regarding grey areas that may be questionable
The process of analyzing hundreds of applications across a large environment can create a situation that’s nearly impossible to manage. Administrators need a solution that can allow them to see all the apps on their environment in one place and assess the riskiness of each, allowing them to easily take action on those with the most vulnerabilities.
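One way to triage a large app inventory is to score each app by the permission scopes it requests, surfacing the riskiest first. The scope names and weights below are illustrative assumptions, not Google Workspace's actual taxonomy.

```python
# Hypothetical risk triage for third-party apps based on requested
# permission scopes. Scope names and weights are illustrative only.

SCOPE_WEIGHTS = {
    "read": 1,     # read access to corporate data
    "write": 3,    # can modify data
    "delete": 5,   # can destroy data
    "admin": 8,    # domain-wide administrative reach
}

def risk_score(app):
    # Unknown scopes get a default weight of 2 rather than zero.
    return sum(SCOPE_WEIGHTS.get(scope, 2) for scope in app["scopes"])

apps = [
    {"name": "calendar-helper", "scopes": ["read"]},
    {"name": "bulk-exporter", "scopes": ["read", "write", "delete"]},
    {"name": "domain-tool", "scopes": ["admin", "write"]},
]
ranked = sorted(apps, key=risk_score, reverse=True)
print([a["name"] for a in ranked])
# ['domain-tool', 'bulk-exporter', 'calendar-helper']
```

A real assessment tool would pull scopes from OAuth grant logs and fold in signals like vendor reputation, but the ranking step looks much like this.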
Employee risk factors
Beyond the typical concern of unsanctioned app downloads, other security issues can occur in conjunction with employee actions.
- Sensitive data transfer – an employee installs an app that connects to the Google Workspace environment and starts migrating sensitive data from a corporate account to their personal private cloud storage account. This commonly happens when an employee decides to leave a company.
- Employee termination – When a company fires an employee, IT admins usually suspend the user account. When you suspend a Google Workspace account, all the apps still have access to sensitive data accessible by the user. This can potentially lead to a data breach.
- Compromised third-party apps – An app can be hacked by cybercriminals, and its developers may not identify the breach before the attackers start downloading or migrating abnormal amounts of data or changing the scope of permissions, behavior that should register as anomalous.
As you can see, the risk of downloading external apps extends even beyond an employee’s tenure at the organization.
Automated security vs. manual analysis
The sheer number of threats and variants, along with complexities like hybrid networks and BYOD, makes it nearly impossible for organizations to rely on manual efforts for adequate security. Computers are simply more effective and efficient at parsing logs and correlating activities.
Humans tend to be much less detail-oriented when it comes to repetitive, monotonous tasks such as crunching numbers and examining data. Additionally, computers don’t get fatigued and can work on an ongoing basis.
Machine learning takes advantage of technology and leverages complex mathematical algorithms to learn about an environment and linked applications and recognize deviations from “normal.”
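The learn-a-baseline-and-flag-deviations idea can be shown with a deliberately minimal statistical sketch. Real products use far richer models; a z-score over a hypothetical app's daily download volume is enough to illustrate "recognizing deviations from normal."

```python
import statistics

# Minimal anomaly detection sketch: learn a baseline of an app's
# daily activity, then flag days that deviate sharply from it.
# The traffic figures are hypothetical.

def is_anomalous(history, today, threshold=3.0):
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > threshold

# Hypothetical daily MB downloaded by a connected app over two weeks.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15, 12, 14, 13, 15]

assert not is_anomalous(baseline, today=14)   # an ordinary day
assert is_anomalous(baseline, today=480)      # bulk-exfiltration pattern
```

The compromised-app and departing-employee scenarios described earlier both surface as exactly this kind of spike against a learned baseline.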
Finding a security solution powered by machine learning that includes an application assessment component is the best way for administrators to protect their cloud environments from third-party threats effectively.
What began as a two-week remote working arrangement due to COVID-19 has now stretched past the nine-month mark for many. The impact of telework can be felt across departments, including IT and security, which drove the almost overnight digital transformation that swept across the globe.
While organizations across various sectors were faced with the challenge of maximizing their telework posture, those in government services had the extra burden of supporting employees who needed remote access to classified information.
The technology investments spurred by the pandemic also left organizations open to new and increasing threats, with KPMG reporting that “more than four in ten (41 percent) of organizations have experienced increased [cybersecurity] incidents mainly from spear phishing and malware attacks.”
So, while organizations have always been encouraged to evaluate their security posture, patch their VPNs, and prioritize Zero Trust architectures, the pandemic forced them to accelerate the adoption of these measures and evaluate their security posture more seriously. In fact, KPMG also found that most CIOs believe the pandemic has permanently accelerated digital transformation and the adoption of emergent technologies.
By observation, this digital transformation and security transition has happened in what can be defined as three stages, originating when the pandemic first hit in March, spanning through the rest of 2020 and into 2021.
Stage 1 – Acclimating employees to their new remote workspace
Many organizations had to figure out how to increase capacity for critical technologies like VPN. While large consulting firms and IT services companies generally had the technology and procedures in place to make the transition, government and financial institutions were much further behind. With both industries operating in environments not conducive to telework pre-pandemic, IT leaders had to onboard large numbers of employees onto the VPN, in some cases going from 10,000 employees on a VPN to 150,000.
Updating technology to accommodate that scale is no easy feat, and other hurdles like supply chain issues (e.g., technology coming from foreign nations that were already in lockdown) presented unexpected obstacles. Lessons learned here pertain to having a disaster recovery and response plan, as well as understanding that you may have to build in more time to solve these types of issues effectively.
Stage 2 – Investing in new tech
Once companies could better support their remote workforce, they needed to further understand the additional controls needed to continue providing a secure remote work infrastructure in the long term. In response to this need, there were significant spikes (as much as 80% according to Okta) in the usage of tools like multi-factor authentication as organizations began to rethink the way employees should access networks.
There has also been an increase in DNS being added to the roster of “easy to implement” security tech geared towards a distributed workforce.
Stage 3 – Developing a permanent remote IT infrastructure
As organizations currently undergo planning and budget allocation for 2021, they are looking to invest in more permanent solutions. IT teams are trying to understand how they can best invest in solutions that will ensure a strong security posture.
There’s also a greater importance in starting to understand the greater need for complete visibility into the endpoint, even as devices are operating on remote networks. Policies are being created around how much work should actually be done on a VPN and by default creating more forward-looking permanent policies and technology solutions.
But as security teams embrace new tools for security and operations to enable continuity efforts, doing so also generates new attack vectors. COVID-19 has presented an opportunity for the IT community to evaluate what can and can't be trusted, even when operating under Zero Trust architectures. For example, some of these technologies, like VPN, can undermine the very protections they were designed to provide.
At the beginning of the pandemic, CISA issued a warning about the continued exploitation of specific VPN vulnerabilities. CISA conducted multiple incident response engagements at U.S. government and commercial entities where malicious cyber threat actors had exploited CVE-2019-11510, an arbitrary file reading vulnerability affecting Pulse Secure VPN appliances, to gain access to victim networks.
Although the VPN provider released patches for CVE-2019-11510 in April 2019, CISA observed incidents where compromised Active Directory credentials were used months after the victim organization patched their VPN appliance.
This exploitation was a textbook example of cybercriminals adapting their attack methodologies to the increased use and scale of new technologies for remote workers. This concentrated adversarial effort caused security teams to reevaluate the tools they have put into place, and the scale at which they have done so. The four areas that security teams are putting a critical focus on include:
- The best process for reducing remote access to sensitive data
- The identification gap between commercial and classified data
- The security of collaboration tools across an organization
- Visibility of endpoints, even when they're not on the corporate network
At the end of the day, security is a journey, not a destination: what worked prior to the pandemic may no longer suit the evolving threat environment. And just because you have a security solution in place doesn't mean it won't become your next point of exploitation. It's imperative for security teams to continuously advise their organizations on the changing threat landscape, always looking to stay one step ahead of the attacker.
As organizations grapple with stage three of addressing their security posture, they must get inside the mindset of today’s cybercriminals who are working around the clock to maliciously exploit new technologies and workflows implemented by companies today.
In this interview, Matt Cooke, cybersecurity strategist, EMEA at Proofpoint, discusses the cybersecurity challenges for retail organizations and the main areas CISOs need to focus on.
Generally, are retailers paying enough attention to security hygiene?
Our research has shown that the vast majority of retailers in the UK and Europe-wide simply aren’t doing enough to protect their customers from fraudulent and malicious emails – only 11% of UK retailers have implemented the recommended and strictest level of DMARC protection, which protects them from cybercriminals spoofing their identity and decreases the risk of email fraud for customers.
Despite this low and worrying statistic, it’s promising to see that a small majority of UK retailers have at least started their DMARC journey – with 53% publishing a DMARC record in general. When we look at the top European-wide online retailers, 60% of them have published a DMARC record.
If we compare this to the largest organisations in the world (the Global 2000), only 51% of these brands have published a DMARC record. This illustrates that the retail industry is slightly ahead of the curve, and therefore is certainly paying attention to security hygiene, but there's still a long way to go.
Unfortunately, starting your DMARC journey isn’t quite enough – without having the ‘reject’ policy in place cyber criminals can still pretend to be you and trick your customers.
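The difference between "started the DMARC journey" and "protected" comes down to the `p=` policy tag in the published TXT record. A small sketch of checking that tag (the record strings below are illustrative examples):

```python
# Sketch of inspecting a DMARC TXT record for its enforcement policy.
# A record with p=none only monitors; only p=reject (or p=quarantine)
# actually interferes with spoofed mail. Example records are illustrative.

def dmarc_policy(txt_record):
    """Extract the p= policy tag from a DMARC TXT record, if present."""
    tags = dict(
        part.strip().split("=", 1)
        for part in txt_record.split(";")
        if "=" in part
    )
    return tags.get("p")

monitoring_only = "v=DMARC1; p=none; rua=mailto:reports@example.com"
enforcing = "v=DMARC1; p=reject; rua=mailto:reports@example.com"

assert dmarc_policy(monitoring_only) == "none"   # published, but no protection
assert dmarc_policy(enforcing) == "reject"       # spoofed mail gets rejected
```

The 53% of UK retailers with a record published would mostly look like the first example; the 11% at the strictest level look like the second.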
What areas should a CISO of a retail organization be particularly worried about?
Business Email Compromise (BEC) and Email Account Compromise (EAC) attacks are on the rise, targeting organisations in all industries globally. Dubbed cybersecurity's priciest problem, social-engineering-driven threats such as BEC and EAC are purpose-built to impersonate someone users trust and trick them into sending money or sensitive information.
These email-based threats are a growing problem. Recent Proofpoint research has shown that since March 2020, over 7,000 CEOs or other executives have been impersonated. Overall, more money is lost to this type of attack than any other cybercriminal activity. In fact, according to the FBI, these attacks have cost organisations worldwide more than $26 billion between June 2016 and July 2019.
The retail industry has a very complex supply chain. When targeting an organisation in this sector, cyber criminals don’t only see success from tricking consumers/customers, they can also target suppliers, with attacks such as BEC, impersonating a trusted person from within the business.
We have seen cases within the retail sector where cyber criminals are compromising suppliers’ email accounts in order to hijack seemingly legitimate conversations with someone within the retail business. The aim here is to trick the retailer into paying an outstanding invoice into the wrong account – the cybercriminals’ account, as opposed to the actual supplier.
In addition, due to the pandemic, global workforces have been thrust into remote working, and those in the retail sector are not exempt. As physical stores have closed worldwide, customer service and interaction have shifted to digital communication more than ever. Employees who were used to talking directly to customers are now using online platforms and new cloud accounts, expanding the attack surface for cybercriminals.
The retail industry, along with all other industries, needs to ensure employees are adequately trained to identify the risks that might arrive through these different communication channels and to handle customer data securely.
Domain spoofing and phishing continue to rise, what’s the impact for retail organizations?
Threat actors are constantly tailoring their tactics, yet email remains the cybercriminals’ attack vector of choice, both at scale and in targeted attacks, simply because it works.
Cybercriminals use phishing because it’s easy, cheap and effective. Email addresses are easy to obtain, and emails are virtually free to send. With little effort and little cost, attackers can quickly gain access to valuable data. As seen in recent breaches, emails sent from official addresses that use the domains of known international companies, seem trustworthy both to the receiver and spam-filters, increasing the number of potential victims. However, this has a detrimental effect on both the brands’ finances and reputation.
Organisations have a duty to deploy authentication protocols, such as DMARC to protect employees, customers, and partners from cybercriminals looking to impersonate their trusted brand and damage their reputation.
Opportunistic cyber criminals will tailor their emails to adapt to whatever is topical or newsworthy at that moment in time. For example, Black Friday-themed phishing emails often take advantage of recipients’ desire to cash in on increasingly attractive deals, creating tempting clickbait for users.
These messages may use stolen branding and tantalising subject lines to convince users to click through, at which point they are often delivered to pages filled with advertising, potential phishing sites, malicious content, or offers for counterfeit goods. As with most things, if offers appear too good to be true or cannot be verified as legitimate email marketing from known brands, recipients should avoid following links.
Do you expect technologies like AI and ML to help retailers eliminate most security risks in the near future?
Today, AI is a vital line of defence against a wide range of threats, including people-centric attacks such as phishing. Every phishing email leaves behind it a trail of data. This data can be collected and analysed by machine learning algorithms to calculate the risk of potentially harmful emails by checking for known malicious hallmarks.
While AI and ML certainly help organisations to reduce risks, they are not going to eliminate security risks on their own. Organisations need to build the right technologies and plug the right gaps from a security perspective, using AI and ML as just part of this overall solution.
Organisations should not outsource their risk management entirely to an AI engine, because AI doesn’t know your business.
There is no doubt that artificial intelligence is now a hugely important line of cyber defence. But it cannot and should not replace all previous techniques. Instead, we must add it to an increasingly sophisticated toolkit, designed to protect against rapidly evolving threats.
The AI in cybersecurity market is projected to generate a revenue of $101.8 billion in 2030, increasing from $8.6 billion in 2019, progressing at a 25.7% CAGR during 2020-2030, ResearchAndMarkets reveals.
The market is categorized into threat intelligence, fraud detection/anti-fraud, security and vulnerability management, data loss prevention (DLP), identity and access management, intrusion detection/prevention system, antivirus/antimalware, unified threat management, and risk & compliance management, on the basis of application. The DLP category is expected to advance at the fastest pace during the forecast period.
Malicious attacks and cyber frauds growing rapidly
The number of malicious attacks and cyber frauds has risen considerably across the globe, which can be attributed to surging internet penetration and the increasing utilization of cloud solutions.
Cyber fraud, including payment and identity card theft, accounts for more than 55% of all cybercrime and leads to major losses for organizations if not mitigated. Owing to this, businesses are adopting advanced solutions for dealing with cybercrime efficiently.
This is further resulting in the growth of the global AI in cybersecurity market. AI-based solutions are capable of combating cyber fraud by reducing response time, identifying threats, and refining techniques for distinguishing attacks that need immediate attention.
The number of cyber-attacks has also been growing because of the surging adoption of the BYOD policy all over the world. It has been observed that the policy aids in increasing productivity and further enhances employee satisfaction.
That being said, it also makes important company information and data vulnerable to cyber-attacks. Employees' devices have wide-ranging capabilities, and IT departments are often not able to fully qualify, evaluate, and approve each and every device, which can pose a high security threat to confidential data.
DLP systems utilized for enforcing data security policies
AI provides advanced protection via the machine learning technology, and hence offers complete endpoint security. The utilization of AI can efficiently aid in mitigating security threats and preventing attacks.
DLP plays a significant role in monitoring, identifying, and protecting the data in storage and in motion over the network. Certain specific data security policies are formulated in each organization and it is mandatory for the IT personnel to strictly follow them.
DLP systems are majorly utilized for enforcing data security policies in order to prevent unauthorized usage or access to confidential data. The fraud detection/anti-fraud category accounted for the major share of the market in 2019 and is predicted to dominate the market during the forecast period as well.
The AI in cybersecurity market by region
Geographically, the AI in cybersecurity market was led by North America in 2019, according to the report. A large number of companies are deploying cybersecurity solutions in the region, owing to the surging number of cyber-attacks.
Moreover, the presence of established players and a high digitization rate are also driving the growth of the regional market. The Asia-Pacific region is expected to progress at the fastest pace during the forecast period.
In conclusion, the market is growing due to increasing cybercrime across the globe and rising adoption of the BYOD policy.
Designed to ensure that all companies securely transmit, store, and process payment card data correctly, compliance with the Payment Card Industry Data Security Standard (PCI DSS) serves a critical purpose.
Failure to comply increases the risk of a data breach, which can lead to potential losses of revenue, customers, brand reputation and customer trust. Despite this risk, the 2020 Verizon Payment Security Report found that only 27.9% of global organizations maintained full PCI DSS compliance in 2019, marking the third straight year that PCI DSS compliance has declined.
In addition to the continued decline in compliance, the current iteration of PCI DSS (3.2.1) is expected to be replaced by PCI DSS 4.0 in mid-2021, with an extended transition period.
But as we enter the busiest shopping season of the year, in the midst of a global pandemic that has upended business practices, organizations cannot risk ignoring compliance with the existing PCI DSS 3.2.1 standard. Failure to achieve and maintain compliance creates gaps in securing sensitive cardholder data, making easy targets for cybercriminals. And with the holiday season historically known for rises in cyber-attacks, organizations that fail to stay focused on compliance will represent the highest risk among all organizations that handle card data.
So, what do organizations need to know about PCI DSS 4.0 and how can they proactively prepare for this update?
Rising risks and what’s new
The financial services industry has always been a prime target for hackers and malicious actors. Last year alone, the Federal Trade Commission received over 271,000 reports of credit card fraud in the United States. As consumers continue to prefer online payments and debit and credit card transactions, the prevalence of card fraud will continue to rise.
The core principle of the PCI DSS is to protect cardholder data, and with PCI DSS 4.0, it will continue to serve as the critical foundation for securing payment card data. As the industry leader in payment card security, the Payment Card Industry Security Standards Council (PCI SSC) will continue evaluating how to evolve the standard to accommodate changes in technology, risk mitigation techniques, and the threat landscape.
Additionally, the PCI SSC is looking at ways to introduce greater flexibility to payment card security and compliance, in order to support organizations using a broad range of controls and methods to meet security objectives.
Overall, PCI DSS 4.0 will set out to:
- Ensure PCI DSS continues to meet the security needs of the payments industry
- Add flexibility and support of additional methodologies to achieve security
- Promote security as a continuous process
- Enhance validation methods and procedures
As consumers and organizations continue to interact and conduct more business online, the need for enforcement of the PCI DSS requirements will only become more apparent.
Consumers are sharing Personally Identifiable Information (PII) with every transaction, and as that information is shared across networks, consumers require organizations to provide assurance that they are handling such data in a secure manner.
Once implemented, PCI DSS 4.0 will place a greater emphasis on security as a continuous process with the goal of promoting fluid data management practices that integrate with an organization’s overall security and compliance posture.
While PCI DSS 4.0 continues to undergo industry consultation prior to its final release, potential changes for organizations to keep in mind include:
- Authentication, specific consideration for the NIST MFA/password guidance
- Broader applicability for encrypting cardholder data on trusted networks
- Monitoring requirements to consider technology advancement
- Greater frequency of testing of critical controls – for example, incorporating some requirements from the Designated Entities Supplemental Validation (PCI DSS Appendix A3) into regular PCI DSS requirements
The second request for comments (RFC) period is still ongoing, but PCI DSS 4.0 is expected to become available in mid-2021. To accommodate the budgetary and organizational changes necessary to achieve compliance, the PCI SSC will set an extended transition period of 18 months and an enforcement date after PCI DSS 4.0 has been published.
Making good use of this time will be critical, so organizations should develop a thorough implementation plan that updates reporting templates and forms, and any ongoing monitoring and recurring compliance validation to meet the updated requirements.
Tips for achieving PCI DSS compliance
The best piece of advice is to first ensure full compliance with the current version of the standard. This provides a solid baseline to work from when planning for future updates to PCI DSS. Once the new version is published in 2021, organizations can begin internal assessment and preparation of their networks for any new requirements.
PCI DSS is already known as being one of the most detailed and prescriptive data security standards to date, and version 4.0 is expected to be even more comprehensive than its predecessor.
With millions of transactions occurring each day, organizations are already collecting, sharing and storing massive amounts of consumer data that they must protect. Even for organizations currently in compliance with PCI DSS 3.2.1, it is critical to establish a holistic view of their data management strategies to assess potential lapses, gaps and threats. To achieve this holistic view and ensure readiness for version 4.0, organizations should take the following steps:
- Conduct a data discovery sweep – By conducting a thorough data discovery sweep of all data storage across the entire network, organizations can eliminate assumptions from their data management practices. Data discovery gives organizations greater visibility into the strengths and vulnerabilities of the network, as well as a better sense of how PII flows through all repositories – including structured data, unstructured data, on-premises storage and cloud storage – to ensure proper data management techniques.
- Enact strategies that promote smart data decisions – Once an organization understands how data flows through its environment and where it’s located, they can use these fact-based insights to enact policies and strategies that prioritize data privacy. Data privacy depends on employees, so organizations must take the time to educate employees on the role they play in organizational security. This includes training and continued network data audits to ensure no customer data slips through the cracks or is forgotten.
- Appoint a leader to drive compliance – With the average organization already adhering to 13 different compliance regulations, compliance can be overwhelming. Organizations should appoint a security compliance officer or internal lead to oversee ongoing compliance initiatives. This person should seek to become an expert in PCI DSS, including progress toward version 4.0, as well as all other applicable compliance regimes. They can then serve as the go-to person for ensuring proper data management practices.
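The data discovery step above can be sketched in code. The following is a hypothetical, minimal example of scanning free text for candidate payment card numbers (PANs) and filtering out false positives with the Luhn checksum that real card numbers satisfy; the regex, function names, and file handling are illustrative assumptions, not part of any PCI DSS tooling.

```python
import re

# Candidate: 13-16 digits, optionally separated by spaces or hyphens.
# Illustrative pattern only; production discovery tools are far more thorough.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum used by payment card numbers."""
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_pans(text: str) -> list:
    """Return digit strings in text that look like PANs and pass Luhn."""
    hits = []
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

# The well-known Visa test number passes; the order reference does not match.
print(find_pans("order notes: card 4111 1111 1111 1111, ref 12345"))
# → ['4111111111111111']
```

A real sweep would walk file shares, databases, and cloud buckets and record where each hit lives, so the organization can map how PII flows through its repositories.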
It’s been nearly 15 years since PCI DSS was first released, and since then, consumers and businesses have substantially increased the number of transactions and business activities conducted online using payment cards. For this reason, the PCI DSS remains just as critical for securing data as it ever was.
The organizations that leverage the PCI DSS as a baseline to achieve ongoing awareness on the security of their data and look for proactive ways to secure their networks will be the most successful moving forward, gaining consumer and employee trust through their compliance actions.
Cohesity announced the results of a survey of 500 IT decision makers in the United States that highlights critical IT and data management challenges midsize and enterprise organizations are facing as companies prepare for 2021.
The survey included 250 respondents from midsize companies ($100M-$1B in revenue) and 250 from enterprise organizations ($1B+ in revenue).
Some of these challenges came to light as companies answered questions about their appetite for Data Management as a Service (DMaaS). With a DMaaS solution, organizations do not have to manage data infrastructure – it is managed for them.
DMaaS provides organizations with easy access to backup and recovery, disaster recovery, archiving, file and object services, dev/test provisioning, data governance, and security – all through one vendor in a Software as a Service (SaaS) model.
IT budgets are being slashed: Seventy percent of respondents state their organization is being forced to cut the IT budget in the next 12 months. Around a third of respondents have to cut the IT budget by 10-25 percent, and a tenth have to cut it by a whopping 25-50 percent.
Verticals facing the largest cuts on average: technology (20 percent), education (18 percent), government/public sector (16 percent).
Many midsize companies are struggling to compete against larger enterprises because of inefficient data management: 27 percent of respondents from midsize companies say they have lost 25-50 percent of deals to larger enterprises because larger enterprises have more resources to manage and derive value from their data.
Even worse, 18 percent of respondents from midsize companies claim to have lost 50-75 percent of deals to larger enterprises for the same reason.
Organizations are spending inordinate amounts of time managing data infrastructure: Respondents say IT teams, on average, spend 40 percent of their time each week installing, maintaining, and managing data infrastructure. Twenty-two percent claim their IT team spends 50-75 percent of its time each week on these tasks.
Technology is needed that makes it easier to derive value from data while also reducing stress levels and employee turnover: When respondents were asked about the benefits of deploying a DMaaS solution versus spending so much time managing data infrastructure, 61 percent cited an ability to focus more on deriving value from data which could help their organization’s bottom line, 52 percent cited reduced stress levels for IT teams, and 47 percent are hopeful this type of solution could also reduce employee turnover within the IT team.
“Research shows IT leaders are anxious for comprehensive solutions that will enable them to do more with data in ways that will help boost revenues and provide a competitive advantage at a time when they are also facing budget cuts, burnout, and turnover.”
The growing appetite for technology that simplifies IT and data management
As businesses look to simplify IT operations, be more cost efficient, and do more with data, respondents are very optimistic about the benefits of DMaaS, which include:
- Cost predictability: Eighty-nine percent of respondents say their organization is likely to consider deploying a DMaaS solution, at least in part, due to budget cuts.
- Helping midsize companies win more business: Ninety-one percent of respondents from midsize companies believe deploying a DMaaS solution will enable their organizations to compete more effectively against larger enterprises that have more resources to manage data.
- Saving IT teams valuable time: Respondents who noted that their IT teams spend time each week managing IT infrastructure believe those teams will save, on average, 39 percent of their time each week if their company had a full DMaaS solution in place.
- Doing more with data: Ninety-seven percent of respondents believe DMaaS unlocks opportunities to derive more value from data using cloud-based services and applications. Sixty-four percent want to take advantage of cloud-based capabilities that enable them to access and improve their security posture, including improving anti-ransomware capabilities.
- Alleviating stress and reducing turnover: Ninety-three percent of respondents believe that deploying a DMaaS solution would enable them to focus less on infrastructure provisioning and data management tasks. Fifty-two percent of these respondents say deploying a DMaaS solution could reduce their team’s stress levels by not having to spend so much time on infrastructure provisioning and management. Forty-seven percent believe deploying a DMaaS solution could reduce employee turnover within the IT team.
Choice is the name of the game for IT in 2021
“The data also pinpoints another important IT trend in 2021: choice is critical,” said Waxman. “IT leaders want to manage data as they see fit.” With respect to choice, respondents stated:
- It’s not one or the other, it’s both: Sixty-nine percent of respondents stated their organization prefers to partner with vendors that offer choice in how their company’s data is managed and will not consider vendors that just offer a DMaaS model — they also want the option to manage some data directly.
- Avoiding one-trick ponies is key: Ninety-four percent of survey respondents stated that it’s important to work with a DMaaS vendor that does more than Backup as a Service (BaaS). If the vendor only offers BaaS, 70 percent are concerned they will have to work with more vendors to manage their data, and respondents believe doing so is likely to increase their workload (77 percent), fail to help reduce costs (65 percent), and lead to mass data fragmentation, where data is siloed and hard to manage and gain insights from (74 percent).
To stay connected with patients, healthcare providers are turning to telehealth services. In fact, 34.5 million telehealth services were delivered from March through June, according to the Centers for Medicare and Medicaid Services. The shift to remote healthcare has also impacted the roll out of new regulations that would give patients secure and free access to their health data.
The shift to online services shines a light on a major cybersecurity issue within all industries, but especially healthcare, where patients have historically had little control over their data: consent.
Hand over data control
Data transparency allows people to know what personal data has been collected, what data an organization wants to collect and how it will be used. Data control provides the end-user with choice and authority over what is collected and even where it is shared. Together the two lead to a competitive edge, as 85% of consumers say they will take their business elsewhere if they do not trust how a company is handling their data.
Regulations such as the GDPR and the CCPA have been enacted to hold companies accountable unlike ever before – providing greater protection, transparency and control to consumers over their personal data.
The U.S. Department of Health and Human Services’ (HHS) regulation, which is set to go into effect in early 2021, would provide interoperability, allowing patients to access, share and manage their healthcare data as they do their financial data. Healthcare organizations must provide people with control over their data and where it goes, which in turn strengthens trust.
How to earn patients’ trust
Organizations must improve their ability to earn patients’ confidence and trust by putting comprehensive identity and access management (IAM) systems in place. Such systems need to offer the ability to manage privacy settings, account for data download and deletion, and enable data sharing with not just third-party apps but also other people, such as additional care providers and family members.
The right digital identity solution should empower the orchestration of user identity journeys, such as registration and authentication, in a convenient way that unifies configuring security and user experience choices.
It should also enable the healthcare organization to protect patients’ personal data while offering their end-users a unified means of control of their data consents and permissions. Below are the four key steps companies should take to earn trust when users hand over data control:
- Identify where digital transformation opportunities and user trust risks intersect. Since users are becoming more skeptical, organizations must analyze “trust gaps” while they are discovering clever new ways to leverage personal data.
- Consider personal data as a joint asset. It’s easy for a company to say consumers own their own personal data, but business leaders have incentives to leverage that data for the value it brings to their business. This changes the equation. All the stakeholders within an organization need to come together and view data as a joint asset in which all parties, including end-users, have a stake.
- Lean into consent. Given the realities of regulations, a business often has a choice: seek consent from end-users rather than simply collecting and using their data. Offering that option builds trust with skeptical consumers and also strengthens your ability to prove your right to use that data.
- Take advantage of consumer identity and access management (CIAM) for building trust. Identity management platforms automate and provide visibility into the entire customer journey across many different applications and channels. They also allow end-users to retain the controls to manage their own profiles, passwords, privacy settings and personal data.
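The consent controls described in the steps above can be sketched as a small per-user ledger, the kind of record a CIAM platform might keep behind its privacy dashboard. This is a hypothetical illustration; the class, method, and scope names below are assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentLedger:
    """Per-user record of data-sharing consents (illustrative sketch)."""
    # scope -> (granted?, UTC timestamp of the last change)
    grants: dict = field(default_factory=dict)

    def grant(self, scope: str) -> None:
        """Record the user's opt-in for a data-sharing scope."""
        self.grants[scope] = (True, datetime.now(timezone.utc))

    def revoke(self, scope: str) -> None:
        """Record withdrawal of consent; keeping the timestamp aids audits."""
        self.grants[scope] = (False, datetime.now(timezone.utc))

    def is_granted(self, scope: str) -> bool:
        """Absence of an explicit grant means no consent."""
        status = self.grants.get(scope)
        return bool(status and status[0])

# Example: a patient shares records with a family member but not a third-party app.
ledger = ConsentLedger()
ledger.grant("share:records:family-member")
ledger.revoke("share:records:third-party-app")
print(ledger.is_granted("share:records:family-member"))    # → True
print(ledger.is_granted("share:records:third-party-app"))  # → False
print(ledger.is_granted("share:records:insurer"))          # → False (never granted)
```

Timestamping every change is the key design choice: it lets the organization demonstrate, for any point in time, that a given use of patient data was covered by an explicit consent.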
Providing data transparency and data control to the end-user enhances the relationship between business and consumer. Organizations can build this trust comprehensively by applying consumer identity and access management that scales across all of their applications. To see these benefits before rules like the HHS regulation take effect, organizations need to act now.