Businesses around the globe are facing challenges as they try to protect data stored in complex hybrid multi-cloud environments from the growing threat of ransomware, according to a Veritas Technologies survey.
Only 36% of respondents said their security has kept pace with their IT complexity, underscoring the need for greater use of data protection solutions that can protect against ransomware across the entirety of increasingly heterogeneous environments.
Need to pay ransoms
Typically, if businesses fall victim to ransomware and are not able to restore their data from a backup copy of their files, they may resort to paying the hackers responsible for the attack to return their information.
The research showed companies with greater complexity in their multi-cloud infrastructure were more likely to make these payments. The mean number of clouds deployed by those organizations who paid a ransom in full was 14.06. This dropped to 12.61 for those who paid only part of the ransom and went as low as 7.22 for businesses who didn’t pay at all.
In fact, only 20% of businesses with fewer than five clouds paid a ransom in full, compared with 44% of those with more than 20. Conversely, 57% of the under-fives paid nothing to their hackers, versus just 17% of the over-20s.
Slow recovery times
Complexity in cloud architectures was also shown to have a significant impact on a business’s ability to recover following a ransomware attack. While 43% of those businesses with fewer than five cloud providers in their infrastructure saw their business operations disrupted by less than one day, only 18% of those with more than 20 were as fast to return to normal.
Moreover, 39% of the over-20s took 5-10 days to get back on track, with just 16% of the under-fives having to wait so long.
Inability to restore data
Furthermore, according to the findings of the research, greater complexity in an organization’s cloud infrastructure also made it slightly less likely that it would ever be able to restore its data in the event of a ransomware attack.
While 44% of businesses with fewer than five cloud providers were able to restore 90% or more of their data, just 40% of enterprises building their infrastructure on more than 20 cloud services were able to say the same.
John Abel, SVP and CIO at Veritas said: “The benefits of hybrid multi-cloud are increasingly being recognised in businesses around the world. In order to drive the best experience, at the best price, organizations are choosing best-of-breed cloud solutions in their production environments, and the average company today is now using nearly 12 different cloud providers to drive their digital transformation.
“However, our research shows many businesses’ data protection strategies aren’t keeping pace with the levels of complexity they’re introducing and, as a result, they’re feeling the impact of ransomware more acutely.
“In order to insulate themselves from the financial and reputational damage of ransomware, organizations need to look to data protection solutions that can span their increasingly heterogeneous infrastructures, no matter how complex they may be.”
Businesses recognize the challenge
The research revealed that many businesses are aware of the challenge they face, with just 36% of respondents believing their security had kept pace with the complexity in their infrastructure.
The top concern as a result of this complexity, as stated by businesses, was the increased risk of external attack, cited by 37% of all participants in the research.
Abel continued: “We’ve heard from our customers that, as part of their response to COVID, they rapidly accelerated their journey to the cloud. Many organizations needed to empower homeworking across a wider portfolio of applications than ever before and, with limited access to their on-premise IT infrastructure, turned to cloud deployments to meet their needs.
“We’re seeing a lag between the high-velocity expansion of the threat surface that comes with increased multi-cloud adoption, and the deployment of data protection solutions needed to secure them. Our research shows some businesses are investing to close that resiliency gap – but unless this is done at greater speed, companies will remain vulnerable.”
Need for investment
46% of businesses said they had increased their budgets for security since the advent of the COVID-19 pandemic. There was a correlation between this elevated level of investment and the ability to restore data in the wake of an attack: 47% of those spending more since the Coronavirus outbreak were able to restore 90% or more of their data, compared with just 36% of those spending less.
The results suggest there is more to be done though, with the average business being able to restore only 80% of its data.
Back to basics
While the research indicates organizations need to more comprehensively protect data in their complex cloud infrastructures, the survey also highlighted the need to get the basics of data protection right too.
Only 55% of respondents could claim they have offline backups in place, even though those who do are more likely to be able to restore more than 90% of their data. Those with multiple copies of data were also better able to restore the lion’s share of their data.
Forty-nine percent of those with three or more copies of their files were able to restore 90% or more of their information, compared with just 37% of those with only two.
The three most common data protection tools to have been deployed amongst respondents who had avoided paying ransoms were: anti-virus, backup and security monitoring, in that order.
The safest countries to be in to avoid ransomware attacks, the research revealed, were Poland and Hungary. Just 24% of businesses in Poland had been on the receiving end of a ransomware attack, and the average company in Hungary had only experienced 0.52 attacks ever.
The highest incidence of attack was in India, where 77% of businesses had succumbed to ransomware, and the average organization had been hit by 5.27 attacks.
Cloud adoption was already strong heading into 2020. According to a study by O’Reilly, 88% of businesses were using the cloud in some form in January 2020. The global pandemic just accelerated the move to SaaS tools. This seismic shift where businesses live day-to-day means a massive amount of business data is making its way into the cloud.
All this data is absolutely critical for core business functions. However, it is all too often mistakenly considered “safe” thanks to blind trust in the SaaS platform. But human error, cyberattacks, platform updates and software integrations can all easily compromise or erase that data … and totally destroy a business.
According to Microsoft, 94% of businesses report security benefits since moving to the cloud. Although there are definitely benefits, data is by no means fully protected – and the threat to cloud data continues to rise, especially as it ends up spread across multiple applications.
Organizations continue to overlook the simple steps they can take to better protect cloud data and their business. In fact, our 2020 Ecommerce Data Protection Survey found that one in four businesses has already experienced data loss that immediately impacted sales and operations.
Cloud data security illusions
Many companies confuse cloud storage with cloud backup. Cloud storage is just that – you’ve stored your data in the cloud. But what if, three years later, you need a record of that data and how it was moved or changed for an audit? What if you are the target of a cyberattack and suddenly your most important data is no longer accessible? What if you or an employee accidentally delete all the files tied to your new product line?
Simply storing data in the cloud does not mean it is fully protected. The ubiquity of cloud services like Box, Dropbox, Microsoft 365, Google G Suite/Drive, etc., has created the illusion that cloud data is protected and easily accessible in the event of a data loss event. Yet even the most trusted providers manage data by following the Shared Responsibility Model.
The same goes for increasingly popular business apps like BigCommerce, GitHub, Shopify, Slack, Trello, QuickBooks Online, Xero, Zendesk and thousands of other SaaS applications. Cloud service providers only fully protect system-level infrastructure and data. So while they ensure reliability and recovery for system-wide failures, the cloud app data of individual businesses is still at risk.
In the current business climate, human errors are even more likely. With the pandemic increasing the amount of remote work, employees are navigating constant distractions tied to health concerns, increasing family needs and an inordinate amount of stress.
Complicating things further, many online tools do not play nicely with each other. APIs and integrations can be a challenge when trying to move or share data between apps. Without a secure backup, one cyberattack, failed integration, faulty update or click of the mouse could wipe out the data a business needs to survive.
While top SaaS platforms continue to expand their security measures, data backup and recovery is missing from the roadmap. Businesses need to take matters into their own hands.
Current cloud backup best practices
In its most rudimentary form, a traditional cloud backup essentially makes a copy of cloud data to support business continuity and disaster recovery initiatives. Proactively protecting cloud data ensures that if business-critical data is compromised, corrupted, deleted or rendered inaccessible, the business still has immediate access to a comprehensive, usable copy of the data it needs to avoid disruption.
From multi-level user access restrictions and password managers to regularly timed manual downloads, there are many basic (even if tedious) ways for businesses to better protect their cloud data. Some companies have invested in building more robust backup solutions to keep their cloud business data safe. However, homegrown backup solutions are costly and time-intensive, as they require constant updates to keep pace with ever-changing APIs.
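For teams experimenting with a lightweight homegrown approach, the “regularly timed download” idea can be sketched in a few lines. Everything here is illustrative: the export URL, token and JSON payload are placeholders for whatever export API your actual SaaS provider offers.

```python
import urllib.request
from datetime import datetime, timezone
from pathlib import Path

def timestamped_name(now: datetime) -> str:
    """Build a sortable, collision-free backup filename from a UTC timestamp."""
    return f"export-{now.strftime('%Y%m%dT%H%M%SZ')}.json"

def backup_export(url: str, token: str, backup_dir: Path) -> Path:
    """Download a full export from a (hypothetical) SaaS export endpoint
    and keep every run as a separate timestamped copy."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        data = resp.read()
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / timestamped_name(datetime.now(timezone.utc))
    dest.write_bytes(data)  # never overwrite an earlier copy
    return dest
```

Run on a schedule (cron, Task Scheduler), this gives point-in-time copies, which is exactly the maintenance burden the article describes: the fetch logic breaks whenever the provider changes its API.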
In contrast, third-party backup solutions can provide an easier to manage, cost/time-efficient way to protect cloud data. There is a wide range of offerings though – some more reputable and secure than others. Any time business data is entrusted to a third party, reputability and security of that vendor must take center stage. If they have your data, they need to protect it.
Cloud backup providers need to meet stringent security and regulatory requirements, so look for explicit details about how they secure your data. As business data continues to move to the cloud, storage limits, increasingly complex integrations and new security concerns will heighten the need for comprehensive cloud data protection.
The trend of business operations moving to the cloud started long before the quarantine. Nevertheless, the cloud storage and security protocols most businesses currently rely on to protect cloud data are woefully insufficient.
Critical business data used to be stored (and secured) in a central location. Companies invested significant resources to manage walls of servers. With SaaS, everything is in the cloud and distributed – apps running your store, your account team, your mailing list, your website, etc. Business data in the backend of each SaaS tool looks very different and isn’t easily transferable.
All the data has become decentralized, and most backups can’t keep pace. It isn’t a matter of “if” a business will one day have a data loss event, it’s “when”. We need to evolve cloud backups into a comprehensive, distributed cloud data protection platform that secures as much business-critical data as possible across various SaaS platforms.
As businesses begin to rethink their approach to data protection in the cloud era, business backups will need to alleviate the worry tied to losing data – even in the cloud. True business data protection means not worrying about whether an online store will be taken out, a third-party app will cause problems, an export is fully up to date, where your data is stored, if it is compliant or if you have all of the information needed to fully (and easily) get apps back up and running in case of an issue.
Delivering cohesive cloud data protection, regardless of which application it lives in, will help businesses break free from backup worry. The next era of cloud data protection needs to let business owners and data security teams sleep easier.
We live in the age of data. We are constantly producing it, analyzing it, figuring out how to store and protect it, and, hopefully, using it to refine business practices. Unfortunately, 58% of organizations make decisions based on outdated data.
While enterprises are rapidly deploying technologies for real-time analytics, machine learning and IoT, they are still utilizing legacy storage solutions that are not designed for such data-intensive workloads.
To select a suitable data storage for your business, you need to think about a variety of factors. We’ve talked to several industry leaders to get their insight on the topic.
Phil Bullinger, SVP and General Manager, Data Center Business Unit, Western Digital
Selecting the right data storage solution for your enterprise requires evaluating and balancing many factors. The most important is aligning the performance and capabilities of the storage system with your critical workloads and their specific bandwidth, application latency and data availability requirements. For example, if your business wants to gain greater insight and value from data through AI, your storage system should be designed to support the accelerated performance and scale requirements of analytics workloads.
Storage systems that maximize the performance potential of solid state drives (SSDs) and the efficiency and scalability of hard disk drives (HDDs) provide the flexibility and configurability to meet a wide range of application workloads.
Your applications should also drive the essential architecture of your storage system, whether directly connected or networked, whether required to store and deliver data as blocks, files, objects or all three, and whether the storage system must efficiently support a wide range of workloads while prioritizing the performance of the most demanding applications.
Consideration should be given to your overall IT data management architecture to support the scalability, data protection, and business continuity assurance required for your enterprise, spanning from core data centers to those distributed at or near the edge and endpoints of your enterprise operations, and integration with your cloud-resident applications, compute and data storage services and resources.
Ben Gitenstein, VP of Product Management, Qumulo
When searching for the right data storage solution to support your organizational needs today and in the future, it’s important to select a solution that is trusted, scalable to secure demanding workloads of any size, and ensures optimal performance of applications and workloads both on premises and in complex, multi-cloud environments.
With the recent pandemic, organizations are digitally transforming faster than ever before, and leveraging the cloud to conduct business. This makes it more important than ever that your storage solution has built-in tools for data management across this ecosystem.
When evaluating storage options, be sure to do your homework and ask the right questions. Is it a trusted provider? Would it integrate well within my existing technology infrastructure? Your storage solution should be easy to manage and meet the scale, performance and cloud requirements for any data environment and across multi-cloud environments.
Also, be sure the storage solution gives IT control over how they manage storage capacity needs and delivers real-time insight into analytics and usage patterns, so they can make smart storage allocation decisions and maximize an organization’s storage budget.
David Huskisson, Senior Solutions Manager, Pure Storage
Data backup and disaster recovery features are critically important when selecting a storage solution for your business, as no organization is now immune to ransomware attacks. When systems go down, they need to be recovered as quickly and safely as possible.
Look for solutions that offer simplicity in management, can ensure backups are viable even when admin credentials are compromised, and can be restored quickly enough to greatly reduce major organizational or financial impact.
Storage solutions that are purpose-built to handle unstructured data are a strong place to start. By definition, unstructured data is unpredictable: it can take any form, size or shape, and can be accessed in any pattern. Platforms designed for it can accelerate small, large, random or sequential workloads, and consolidate a wide range of them on a unified fast file and object storage platform that maintains its performance even as the amount of data grows.
If you have an existing backup product, you don’t need to rip and replace it. There are storage platforms with robust integrations that work seamlessly with existing solutions and offer a wide range of data-protection architectures so you can ensure business continuity amid changes.
Tunio Zafer, CEO, pCloud
Bear in mind: your security team needs to assist. Answer these questions to find the right solution: Do you need ‘cold’ storage or cloud storage? If you’re looking only to store files for backup, you need a cloud backup service. If you’re looking to store, edit and share, go for cloud storage. Where are their storage servers located? If your business is located in Europe, the safest choice is a storage service based in Europe.
Do they offer client-side encryption? Client-side encryption means that your data is secured on your device and is transferred already encrypted. What is their support package? At some point, you’re going to need help. A data storage service with a support package that’s included for free and answers within 24 hours is preferable.
Ransomware has been noted by many as the most threatening cybersecurity risk for organizations, and it’s easy to see why: in 2019, more than 50 percent of all businesses were hit by a ransomware attack – costing an estimated $11.5 billion. In the last month alone, major consumer corporations, including Canon, Garmin, Konica Minolta and Carnival, have fallen victim to major ransomware attacks, resulting in the payment of millions of dollars in exchange for file access.
While there is a lot of discussion about preventing ransomware from affecting your business, the best practices for recovering from an attack are a little harder to pin down.
While the monetary amounts may be smaller for your organization, the importance of regaining access to the information is just as high. What steps should you take for effective ransomware recovery? A few of our best tips are below.
1. Infection detection
Arguably the most challenging step for recovering from a ransomware attack is the initial awareness that something is wrong. It’s also one of the most crucial. The sooner you can detect the ransomware attack, the less data may be affected. This directly impacts how much time it will take to recover your environment.
Ransomware is designed to be very hard to detect. When you see the ransom note, it may have already inflicted damage across the entire environment. Having a cybersecurity solution that can identify unusual behavior, such as abnormal file sharing, can help quickly isolate a ransomware infection and stop it before it spreads further.
Abnormal file behavior detection is one of the most effective means of detecting a ransomware attack, and it produces the fewest false positives compared with signature-based or network traffic-based detection.
One additional method of detecting a ransomware attack is a “signature-based” approach. The issue with this method is that it requires the ransomware to be known: if the code is available, software can be trained to look for it. This is not recommended, however, because sophisticated attacks use new, previously unknown forms of ransomware. Thus, an AI/ML-based approach is recommended, one that looks for behaviors such as rapid, successive encryption of files and determines that an attack is happening.
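As a rough illustration of the behavioral approach described above, rapid, successive writes of high-entropy (encrypted-looking) data can be flagged with a simple sliding-window heuristic. This is a toy sketch, not a production detector; the window size and thresholds are arbitrary assumptions.

```python
import math
from collections import deque

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed data approaches 8.0."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

class EncryptionBurstDetector:
    """Flag a burst of high-entropy file writes within a short time window."""

    def __init__(self, window_seconds=10.0, burst_threshold=20, entropy_threshold=7.5):
        self.window = window_seconds
        self.burst = burst_threshold
        self.entropy = entropy_threshold
        self.events = deque()  # timestamps of suspicious writes

    def observe(self, timestamp: float, sample: bytes) -> bool:
        """Record one file write; return True once a burst is detected."""
        if shannon_entropy(sample) >= self.entropy:
            self.events.append(timestamp)
        # Drop events that have aged out of the window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.burst
```

A real system would feed this from filesystem or cloud-API audit events and combine it with other signals (mass renames, ransom-note filenames) to keep false positives down.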
Effective cybersecurity also includes good defensive mechanisms that protect business-critical systems like email. Often ransomware affects organizations by means of a phishing email attack or an email that has a dangerous file attached or hyperlinked.
If organizations are ill-equipped to handle dangerous emails, this can be an easy way for ransomware to make its way inside the walls of your organization’s on-premise environment or within the cloud SaaS environment. With cloud SaaS environments in particular, controlling third-party applications that have access to your cloud environment is extremely important.
2. Contain the damage
After you have detected an active infection, the ransomware process can be isolated and stopped from spreading further. If this is a cloud environment, these attacks often stem from a remote file sync or other process driven by a third-party application or browser plug-in running the ransomware encryption process. Digging in and isolating the source of the ransomware attack can contain the infection so that the damage to data is mitigated. To be effective, this process must be automated.
Many attacks happen after hours, when admins are not monitoring the environment, and the reaction must be rapid to stop the spread of the infection. Security policy rules and scripts must be put in place as part of proactive protection. Thus, when an infection is identified, the automation kicks in to stop the attack by removing the executable file or extension and isolating the infected files from the rest of the environment.
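A minimal sketch of the isolation step might look like the following. The quarantine directory, log format and the idea of moving files aside are all illustrative assumptions; real tooling would also suspend the offending process or revoke the third-party app’s access token.

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path

QUARANTINE = Path("quarantine")  # assumed location, outside the synced tree

def quarantine_files(infected: list[Path], log: Path = Path("quarantine.log")) -> list[Path]:
    """Move suspected-encrypted files out of the working tree and log each action,
    so the spread stops even when no admin is watching."""
    QUARANTINE.mkdir(exist_ok=True)
    moved = []
    with log.open("a") as fh:
        for path in infected:
            dest = QUARANTINE / path.name
            shutil.move(str(path), dest)
            fh.write(f"{datetime.now(timezone.utc).isoformat()} quarantined {path}\n")
            moved.append(dest)
    return moved
```

Wired to a detector’s output, a routine like this gives the automated, always-on response the article calls for, with an audit trail for the post-incident review.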
Another way organizations can help protect themselves and contain the damage should an attack occur is by purchasing cyber liability insurance. Cyber liability insurance is a specialty insurance line intended to protect businesses (and the individuals providing services from those businesses) from internet-based risks (like ransomware attacks) and risks related to information technology infrastructure, information privacy, information governance liability, and other related activities. In this type of attack situation, cyber liability insurance can help relieve some of the financial burden of restoring your data.
3. Restore affected data
In most cases, even if the ransomware attack is detected and contained quickly, there will still be a subset of data that needs to be restored. This requires having good backups of your data to pull back to production. Following the 3-2-1 backup best practice, it’s imperative to have your backup data in a separate environment from production.
The 3-2-1 backup rule consists of the following guidelines:
- Keep 3 copies of any important file, one primary and two backups
- Keep the file on 2 different media types
- Maintain 1 copy offsite
If your backups are of cloud SaaS environments, storing these “offsite” using a cloud-to-cloud backup vendor aligns with this best practice. This will significantly minimize the chance that your backup data is affected along with your production data.
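The 3-2-1 rule is easy to encode as a sanity check over an inventory of backup copies. A minimal sketch (the `BackupCopy` structure is invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str    # e.g. "disk", "tape", "cloud"
    offsite: bool

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """3-2-1 rule: >=3 copies (primary + 2 backups),
    >=2 distinct media types, >=1 copy offsite."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )
```

For example, a primary on disk, a backup on tape and a cloud-to-cloud copy passes; a local disk plus one offsite disk fails on both the copy count and the media count.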
The tried and true way to recover from a ransomware attack involves having good backups of your business-critical data. The importance of backups cannot be stressed enough when it comes to ransomware. Recovering from backup allows you to be in control of getting your business data back and not the attacker.
All too often, businesses may assume incorrectly that the cloud service provider has “magically protected” their data. While there are a few mechanisms in place on the cloud service provider side, ultimately the data is your responsibility as part of the shared responsibility model of most CSPs; Microsoft, for example, documents its position on shared responsibility.
4. Notify the authorities
Many of the major compliance regulations that most organizations fall under today, such as PCI-DSS, HIPAA, GDPR, and others, require that organizations notify regulatory agencies of the breach. Notification of the breach should be immediate and the FBI’s Internet Crime Complaint Center should be the first organization alerted. Local law enforcement should be informed next. If your organization is in a governed industry, there may be strict guidelines regarding who to inform and when.
5. Test your access
Once data has been restored, test access to the data and any affected business-critical systems to ensure the recovery of the data and services have been successful. This will allow any remaining issues to be remedied before turning the entire system back over to production.
If you’re experiencing slower than usual response times in the IT environment or larger-than-normal file sizes, it may be a sign that something sinister is still looming in the database or storage.
Ransomware prevention v. recovery
Sometimes the best offense is a good defense. When it comes to ransomware and regaining access to critical files, there are only two options. You either restore your data from backup if you were forward-thinking enough to have such a system in place, or you have to pay the ransom. Beyond the obvious financial implications of acquiescing to the hacker’s demands, paying is risky because there is no way to ensure they will actually provide access to your files after the money is transferred.
There is no code of conduct or contract when negotiating with a criminal. A recent report found that some 42 percent of organizations who paid a ransom did not get their files decrypted.
Given the rising number of ransomware attacks targeting businesses, the consequences of not having a secure backup and detection system in place could be catastrophic to your business. Investing in a solution now helps ensure you won’t make a large donation to a nefarious organization later. Learning from the mistakes of other organizations can help protect yours from a similar fate.
Although 97% of organizations said that Active Directory (AD) is mission-critical, more than half have never actually tested their AD cyber disaster recovery process or do not have a plan in place at all, a Semperis survey of over 350 identity-centric security leaders reveals.
“The expanded work-from-home environment makes organizational identity a priority and also increases the attack surface relative to Active Directory,” said Charles Kolodgy, Principal at Security Mindsets.
Key research findings
- AD outages have a serious business impact. 97% of respondents said that AD is mission-critical to the business, and 84% said that an AD outage would be significant, severe, or catastrophic.
- AD recovery failure rate is high. 71% of respondents were only somewhat confident, not confident, or unsure about their ability to recover AD to new servers in a timely fashion. Only a tiny portion (3%) said they were “extremely confident.”
- AD recovery processes remain largely untested. Exactly 33% of organizations said they have an AD cyber disaster recovery plan but never tested it, while 21% have no plan in place at all. Out of the entire poll, just 15% of respondents said they had tested their AD recovery plan in the last six months.
- Organizations expressed many concerns about AD recovery, with the lack of testing being the number one concern. This includes organizations that have not tested AD recovery at all and those who have tried but failed.
“In today’s cloud-first, mobile-first world, dependency on Active Directory is rapidly growing and so is the attack surface,” said Thomas LeDuc, VP of Marketing at Semperis. “One survey respondent even noted that a prolonged AD outage would be akin to a nuclear inferno. So, it’s clear that while organizations understand the importance of AD, they are a step behind in securely managing it, particularly as they support an expanding ecosystem of mobile workers, cloud services, and devices.”
As the gatekeeper to critical applications and data in 90% of organizations worldwide, AD has become a prime target for widespread cyberattacks that have crippled businesses and wreaked havoc on governments and non-profits.
Active Directory recovery plan
- Minimize Active Directory’s attack surface: Lock down administrative access to the Active Directory service by implementing administrative tiering and secure administrative workstations, apply recommended policies and settings, and scan regularly for misconfigurations – accidental or malicious – that potentially expose your forest to abuse or attack.
- Monitor Active Directory for signs of compromise and roll back unauthorized changes: Enable both basic and advanced auditing and periodically review key events via a centralized console. Monitor object and attribute changes at the directory level and changes shared across domain controllers.
- Implement a scorched-earth recovery strategy in the event of a large-scale compromise: Widespread encryption of your network, including Active Directory, requires a solid, highly automated recovery strategy that includes offline backups for all your infrastructure components as well as the ability to restore those backups without reintroducing any malware that might be on them.
The process of evaluating solid state drives (SSDs) for enterprise applications can present a number of challenges. You want maximum performance for the most demanding servers running mission-critical workloads.
We sat down with Scott Hamilton, Senior Director, Product Management, Data Center Systems at Western Digital, to learn more about SSDs and how they fit into current business environments and data centers.
What features do SSDs need to have in order to offer uncompromised performance for the most demanding servers running mission-critical workloads in enterprise environments? What are some of the misconceptions IT leaders are facing when choosing SSDs?
First, IT leaders must understand environmental considerations, including the application, use case and its intended workload, before committing to specific SSDs. It’s well understood that uncompromised performance is paramount to support mission critical workloads in the enterprise environment. However, performance has different meanings to different customers for their respective use cases and available infrastructure.
Uncompromised performance may focus more on latency (and associated consistency), IOPs (and queue depth) or throughput (and block size) depending on the use case and application.
Additionally, the scale of the application and solution dictates the level of emphasis, whether it be interface-, device- or system-level performance. Similarly, mission-critical workloads may have different expectations or requirements, e.g., high availability support, disaster recovery, or performance and performance consistency. This is where IT leaders need to rationalize and test the best fit for their use case.
Today there are many different SSD segments that fit certain types of infrastructure choices and use cases. For example, PCIe SSD options are available from boot drives to performance NVMe SSDs, and they come in different form factors such as M.2 (ultra-light and thin) and U.2 (standard 2.5-inch), to name a few. It’s also important to consider power/performance. Some applications do not require interface saturation, and can leverage low-power, single-port mainstream SSDs instead of dual-port, high-power, higher-endurance and higher-performance drives.
IT managers have choices today, which they should carefully evaluate and test against their requirements for infrastructure elasticity and scaling, to ultimately align their future system architecture strategies when choosing the best-fit SSD. My final word of advice: sometimes it is not wise to pick the highest-performing SSD available on the market, as you do not want to pay for a rocket engine for a bike. Understanding the use case and success metrics – e.g., price-capacity, latency, price-performance (either $/IOPS or $/GB/sec) – will help eliminate some of the misconceptions IT leaders face when choosing SSDs.
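The price-performance metrics mentioned here are simple ratios that make candidate drives directly comparable. A quick sketch; the prices and specs below are invented for illustration, not vendor figures.

```python
def price_per_iops(price_usd: float, iops: float) -> float:
    """Dollars per I/O operation per second."""
    return price_usd / iops

def price_per_throughput(price_usd: float, gb_per_sec: float) -> float:
    """Dollars per GB/s of sequential throughput."""
    return price_usd / gb_per_sec

# Hypothetical drives: a high-end NVMe SSD vs. a mainstream SATA SSD.
nvme = {"price": 800.0, "iops": 1_000_000, "gbps": 6.8}
sata = {"price": 150.0, "iops": 90_000, "gbps": 0.55}

for name, d in (("nvme", nvme), ("sata", sata)):
    print(name,
          f"$/IOPS={price_per_iops(d['price'], d['iops']):.6f}",
          f"$/(GB/s)={price_per_throughput(d['price'], d['gbps']):.1f}")
```

Note that with these made-up numbers the cheaper drive actually costs more per unit of throughput, which is exactly why the sticker price alone is a poor selection criterion.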
How has the pandemic accelerated cloud adoption and how has that translated to digital transformation efforts and the creation of agile data infrastructures?
The rapid increase in our global online footprint is stressing IT infrastructure from virtual office, live video calls, online classes, healthcare services and content streaming to social media, instant messaging services, gaming and e-commerce. This is the new normal of our personal and professional lives. There is no doubt that the pandemic has increased dependence on cloud data centers and services. Private, public and hybrid cloud use cases will continue to co-exist due to costs, data governance and strategies, security and legacy application support.
Digital transformation continues all around us, and the pandemic accelerated these initiatives. Before the pandemic, digital transformation projects generally spanned several years, with lengthy and exhaustive cycles for businesses to go online and scale up their web footprint. However, 2020 has really surprised all of us. Tectonic shifts have happened (and are still happening), with projects now taking only weeks or months, even for businesses that are learning to scale up for the first time.
This infrastructure stress will further accelerate technological shifts as well, whether it be from SAS to NVMe at the endpoints or from DAS- or SAN-based solutions to NVMe over Fabrics (NVMe-oF) based solutions, delivering greater agility to meet both the dynamic and unforeseen demands of the future.
Organizations are scrambling to update their infrastructure, and many are battling inefficient data silos and large operational expenses. How can data centers take full advantage of modern NVMe SSDs?
NVMe SSDs are playing a pivotal role in making the new reality possible for people and businesses around the world. As users transition from SAS and SATA, NVMe is not only increasing overall system performance and utilization, it’s also creating next-generation flexible and agile IT infrastructure. Capitalizing on the power of NVMe, SSDs now enable data centers to run more services on the same hardware, i.e., improved utilization. This is an important consideration for IT leaders and organizations looking to improve efficiencies.
NVMe SSDs are helping both public and private cloud infrastructures in various areas, delivering the highest-performance storage and the lowest-latency interface, plus the flexibility to support needs from boot drives to high-performance compute, as well as infrastructure productivity. NVMe supports enterprise specifications for server and storage systems such as namespaces, virtualization support, scatter-gather lists, reservations, fused operations, and emerging technologies such as Zoned Namespaces (ZNS).
Additionally, NVMe-oF extends the benefits of NVMe technology, enabling data sharing between hosts and NVMe-based platforms over a fabric. The ratification of the NVMe 1.4 and NVMe-oF 1.1 specifications, with the addition of ZNS, has further strengthened NVMe’s position in enterprise data centers. Therefore, by introducing NVMe SSDs into their infrastructure, organizations will have the tools to get more from their data assets.
What kind of demand for faster hardware do you expect in the next five years?
Now and into the future, data centers of all shapes and sizes are constantly striving to achieve greater scalability, efficiencies and increased productivity and responsiveness with the best TCO. Business leaders and IT decision-makers must understand and navigate through the complexities of cloud, edge and hybrid on-prem data center technologies and architectures, which are increasingly being relied upon to support a growing and complex ecosystem of workloads, applications and AI/IoT datasets.
More than a decade ago, IT systems relied on software running on dedicated general-purpose systems for their applications. This created many inefficiencies and scaling challenges, especially in large-scale system designs. Today, data dependence has been growing consistently and exponentially, which has forced data center architects to decouple applications from the systems. This was the birth of the HCI market and now the composable disaggregated infrastructure market.
Next-generation infrastructures are moving to disaggregated, pooled resources (e.g., compute, accelerators and storage) that can be dynamically composed to meet the ever increasing and somewhat unpredictable demands of the future. All of this allows us to make efficient use of hardware to increase infrastructure agility, scalability and software control, remove various system bottlenecks and improve overall TCO.
Phishing scams tied to COVID-19 show no signs of stopping. More than 3,142 phishing and counterfeit pages went live every day in January, and by March, the number had grown to 8,342. In mid-April, Google reported seeing more than 18 million pandemic-related malware and phishing emails each day over the course of just a single week. By mid-May, cybercriminal activity had reached a new high, with coronavirus clearly playing a major role.
The main cause of data breaches continues to be human error. With so many employees suddenly working from home – cut off from everyday contact with IT – the pandemic has offered hackers an ideal period to exploit a lack of security vigilance. Outdated home software, forgotten updates, skipped patches… Aside from a welcome mat, hackers couldn’t have a more gracious invitation or an easier path into a company.
IT concern and chaos
For IT, the biggest concern with a remote workforce is the inability to control the network in a traditional sense. Perhaps their greatest fear is a ransomware attack on company data made possible by users connected through their VPN and attaching to file shares.
With the pandemic, more people are seeking information and visiting websites with charts and graphs holding related statistics. Sadly, bogus or malicious sites take advantage of the situation. Making matters worse, networks are often shared with others, such as the employee’s children, who use them for recreational activities but aren’t so savvy at identifying threats. Most ransomware attacks are the result of visiting hacked or malicious websites or clicking on an infected email attachment.
Attackers have been taking advantage of remote work “chaos” and the onslaught is unsettling. We’re seeing an uptick in information-gathering attempts and rising malicious code and ransomware instances, because people are visiting places they normally wouldn’t and hackers are leveraging changes in work habits.
Malware is increasingly holding company resources and data for ransom, which in addition to that expense can cause costly downtime, negatively impact a company’s reputation and more.
Backup and disaster recovery (DR) technologies have progressed in recent years, reducing recovery point and time objectives (RPO and RTO). However, they haven’t kept pace with hackers, and the backup process is a significant administrative and management burden.
One step forward, two steps back
Ransomware attacks are extremely disruptive. IT needs to figure out how the infection started and see if they can prevent it from happening again. It’s imperative to have a reliable backup copy from before the infection, but in some cases, ransomware can even encrypt those along with the original files. A lot of details need to be worked out.
The problem is, traditional backups – while often an organization’s last line of defense against a disaster – are outdated and cost companies a lot of time and money. Configuring incremental and full backup schedules or pulling backups across a WAN to a central site is cumbersome at best, unreliable at worst. So is babysitting backups to find out if they worked, and rotating and refreshing tape and disk media.
In the end, it still takes hours or days to recover.
Not only does this pose a significant administrative and management burden, but backup also remains an expensive bolt-on to storage systems. In large organizations, entire teams are dedicated to managing the backup process and ensuring backup integrity. Faulty or corrupt backups remain a significant problem; in fact, ransomware can deliver code that works its way through systems over time before attacking data.
Unfortunately, backing up to just before the point-of-origin could actually set the attack in motion all over again.
Getting ahead without backing up
In a perfect world, you wouldn’t need to buy a data protection solution: your storage system would protect itself. But the world is not perfect, and that’s why enterprises deploy a storage system with backup and DR. That said, today there actually isn’t a need for separate storage and backup systems.
By taking advantage of the cloud, global file systems can enable companies of all sizes to store, access and share file data without further backup and DR systems. They can take snapshots to capture changes – every five minutes for active data – which are sent to the cloud where the gold copy is kept. The global file system can store these in the cloud without any significant additional cost.
If snapshots are written to the cloud as Write Once Read Many (WORM) objects, data is prevented from being corrupted or overwritten. With separate metadata versions for each snapshot, restoring a file or even multiple terabytes of data takes just seconds, eliminating a full restore or migration.
What makes the process fast is that you only need to point to an earlier version of the files; there’s no need to undergo a slow copy. Because the gold copy is updated incrementally, you’ll likely find a version that was captured just minutes before the point of infection.
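A minimal model of this restore-by-pointer idea, assuming each snapshot is an immutable (WORM-style) copy of path-to-object metadata; all class and variable names here are hypothetical, not any vendor’s API:

```python
# Minimal model of a snapshot-based global file system: each snapshot is an
# immutable mapping of path -> object id kept in the cloud. "Restoring"
# selects an earlier snapshot's metadata; no file data is copied.

class GlobalFileSystem:
    def __init__(self):
        self.snapshots = []   # list of (timestamp, {path: object_id})
        self.live = {}        # current metadata view

    def take_snapshot(self, ts):
        self.snapshots.append((ts, dict(self.live)))  # immutable metadata copy

    def restore_before(self, infection_ts):
        # Point at the latest snapshot taken before the infection: O(metadata),
        # not O(data), which is why recovery takes seconds rather than days.
        for ts, meta in reversed(self.snapshots):
            if ts < infection_ts:
                self.live = dict(meta)
                return ts
        raise LookupError("no clean snapshot available")

fs = GlobalFileSystem()
fs.live = {"/doc.txt": "obj1"}
fs.take_snapshot(ts=100)
fs.live["/doc.txt"] = "ENCRYPTED"        # ransomware hits at ts=105
ts = fs.restore_before(infection_ts=105)
print(ts, fs.live["/doc.txt"])           # recovered from the ts=100 snapshot
```

Because the snapshots themselves are write-once, the ransomware’s overwrite only touches the live view, never the gold copy.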
Simply put, self-protecting, cloud-based global file systems do away with the need for a separate backup system. With this approach, not only does IT no longer need to dedicate time and resources to backup management, they gain better RPOs and RTOs and the ability to recover from ransomware attacks in minutes. For many IT leaders in 2020, the first step to effectively countering ransomware and ensuring their enterprises continue to move forward will be to stop backing up.
The vast majority of SMBs both expect the unexpected and feel that they’re ready for disaster – though they may not be, Infrascale reveals.
Ninety-two percent of SMB executives said they believe their businesses are prepared to recover from a disaster. However, as previously reported, more than a fifth of SMB leaders said they don’t have a data backup or disaster recovery solution in place.
The research also indicates that 16% of SMB executives admitted they do not know their own Recovery Time Objectives (RTOs), although 24% expect to recover their data in less than 10 minutes after a disaster and 29% expect to do so in under one hour following a disaster. An RTO is the time from the start of recovery to the point at which all of an organization’s infrastructure and services are available.
Survey results also highlight that there’s no common understanding of disaster recovery and that expectations around disaster recovery solution results and recovery times differ by industry. There is also sector variation in why businesses that feel unprepared for disaster remain unprepared.
“The latest results from our survey are quite surprising, as they suggest that most SMBs think they are prepared to recover their data and be back up and running after a disaster. Yet more than one in five of those same respondents said they do not have a disaster recovery or backup solution in place,” said Russell P. Reeder, CEO of Infrascale.
“That data suggests that there are either varying definitions of what it means to be able to recover from a disaster or, quite simply, a lack of understanding of what it truly means to be able to recover from a disaster. Make no mistake, if a business does not have a disaster recovery solution in place, or at the very least a solution to back up its data, there is no way it can get the data back from a data loss event.”
The research is based on a survey of more than 500 C-level executives at SMBs. CEOs represented 87% of the group. Almost all of the remainder was split between CIOs and CTOs.
A gap between expectation and reality
While 84% of the total SMB survey group said they are aware of their organizations’ RTO, the rest revealed that they are not. More business-to-consumer (B2C) company leaders are in the dark about their organizations’ RTOs than business-to-business (B2B) C-level executives: 22% of B2C leaders admitted they do not know their RTOs, while 10% of B2B leaders said they lack such knowledge.
Of those who were able to state their RTOs, 9% said they have an RTO of one minute or less. 30% said they have an RTO of under an hour. And 17% said they have an RTO of one day. But expectations are clearly not the reality in this scenario without redundancy, automation, and a substantial budget to pay for it.
The research also analyzed RTO from an industry vertical perspective. It found that 26% of telecommunications leaders said their RTO was 10 minutes. This was the No. 1 answer for this sector.
Meanwhile, the top answer of executives in the accounting/finance/banking and retail/e-commerce sectors said their RTO was under an hour, with this answer getting 36% and 29% of the votes, respectively. The No. 1 answer for healthcare, garnering a 35% share from this sector, was an RTO of one day.
“Having a low RTO can be achieved one of two ways: you either have redundant, highly automated infrastructure or an expensive disaster recovery solution. If you’re willing to trade just a little amount of time for cost, you can achieve a reasonable RTO with an affordable disaster recovery solution,” said Reeder.
“Every industry uses technology differently to achieve their business goals, which in turn will have a different requirement around the redundancy and availability of their systems. While it may be possible to have an RTO of less than one minute if you implement redundant systems, those costs usually outweigh the benefits.”
When business leaders were asked how long they expect it will take to recover their data after a disaster, 24% of the total group and 33% of telecommunications executives said under 10 minutes. Thirty-eight percent of the accounting/finance/banking group and 31% of retail/e-commerce leaders said under one hour.
Disaster recovery has a range of definitions and industry vertical viewpoints
The one thing that everyone can agree on is that disaster recovery is needed in multiple scenarios. Fifty-eight percent of the total survey group said disaster recovery means recovering data after data loss and 55% said it involves recovery from a malware attack. 54% said disaster recovery provides the ability to become operational quickly after a disaster.
“The fact that 58% of the survey group said disaster recovery means getting data back after a loss, yet one in five say they don’t have a solution in place to do this, and most SMBs still believe they are prepared to recover from a disaster, does not add up,” said Reeder.
“It highlights the need for SMBs to do detailed assessments on their true disaster recovery readiness or face the very real risk of being totally unprepared in the unfortunate but ever-present event of a disaster.”
The telecommunications sector survey group most commonly described disaster recovery as recovering data after data loss, with 59% of these respondents voicing this opinion. The healthcare (68%) and retail/e-commerce (66%) groups indicated that they see disaster recovery primarily as the ability to become operational quickly after a disaster.
Meanwhile, 56% of SMBs in the accounting/finance/banking sector defined disaster recovery as the ability to recover from a natural disaster like a hurricane or tornado.
74% of retail/e-commerce and 73% of healthcare industry executives said their top expectation of a disaster recovery solution is to minimize the time until their business is fully operational following a disaster.
Sixty-four percent of accounting/finance/banking sector leaders said zero data loss is their top expectation of a disaster recovery solution. Telecommunications leaders indicated their top expectation of a disaster recovery solution is to deliver cost savings related to on-call IT technicians, with 63% providing that answer.
“Every business is unique. But one thing all organizations and sectors have in common is the need to eliminate downtime and data loss,” said Reeder.
“Whether a business is dealing with a server crash or a site-wide disaster, unplanned downtime comes with serious consequences. Businesses can dramatically reduce downtime, quickly recover from ransomware attacks, and avoid paying ransoms by employing disaster recovery as a service.
“SMBs also can get ahead of an anticipated disaster such as a hurricane by failing over to their disaster recovery solution before the disaster is expected to hit, completely mitigating any downtime.”
Different industry sectors provide different reasons for their lack of preparedness
Most SMBs expressed the belief that they are prepared to recover from disaster, but 8% admitted they do not feel they are ready to bounce back from one. Of this latter group, 39% said they don’t have the budget to prepare to recover from a disaster.
Thirty-seven percent said they are unprepared because they have limited time to research solutions. 32% said they are not prepared because they lack the right resources. 27% said they don’t have the technology in place to recover from a disaster.
Healthcare (67%) and business-to-consumer entities (48%) both said the top reason their organizations are not prepared to recover from a disaster is that they have limited time to research solutions. 50% of the SMBs in the accounting/finance/banking group said their businesses are not prepared because their IT teams are stretched.
The top answers from the business-to-business survey group regarding lack of preparedness to recover from a disaster were that they don’t have the right resources or the budget. Both answers garnered 31% of the vote from business-to-business organizations.
“This survey data highlights how important it is for businesses to understand and address their disaster recovery risks before it’s too late,” said Reeder.
Most SMBs have faced micro-disasters in the past year
Yet for all the differences among industry sectors, one thing all SMBs seem to have in common is suffering from malware infections, corrupted hard drives and/or other micro-disasters. 51% of the survey group said they had faced such events within the past year.
B2B entities were more likely than B2C organizations to have been subjected to such scenarios. While 41% of B2C organizations have experienced a micro-disaster in the past year, 59% of B2B entities admitted they have had to face such a situation.
22% of the total survey group said they have experienced a micro-disaster more than once within the past year. 24% of B2B organizations said they have had such repeat experiences, while micro-disasters have hit 20% of B2Cs in the past year.
Businesses are adapting IT strategies, reprioritizing cloud adoption and automated database monitoring due to the effects of a global lockdown, remote working and a focus on business continuity, according to Redgate.
The report, which surveyed nearly 1,000 respondents in April 2020, reveals that while performance monitoring and backups remain the most common responsibilities for database professionals, managing security and user permissions have leapt to third and fourth place, respectively.
However, there seems to be a learning curve. As database professionals adopt these new roles, respondents say that staffing and recruitment is the second biggest challenge in managing estates.
Additionally, the two biggest causes of problems with database management come from human error (23%) and ad hoc user access (18%), which could be a result of increased remote working as tasks become more widely distributed.
Increase in the use of cloud-based platforms
In support of remote teams, respondents reported a rapid increase in the use of cloud-based platforms, particularly Microsoft Azure, which is up 15 percentage points in the last year.
With many businesses like Twitter announcing that remote working will become business-as-usual in the future, the report highlights why effective, reliable monitoring of database estates is critical to business longevity.
Perhaps as a consequence, only 18% of respondents continue to monitor their estates manually, and for those who are managing 50 instances or more, the number using a monitoring tool rises to 90%.
Cloud migration and monitoring are the biggest challenges
Microsoft Azure remains the most used cloud platform, with 20% of respondents using it frequently, and a further 34% using it occasionally, but migrating to the cloud can be difficult, and doing so with a distributed team doesn’t make things easier.
Estates are growing
Organizations with fewer than 100 instances have dropped for a second year, while those with over 100 instances have grown – and estates with over 1,000 instances grew by nine percentage points.
Monitoring is key to Database DevOps success
Satisfaction with monitoring tools is at an all-time high
68% of respondents say they are happy with their third-party monitoring tools, up seven percentage points on 2019, which may reflect the increased reliance on using such tools to monitor estates remotely.
SQL Server remains the most popular database platform
SQL Server is used by 81% of respondents, followed by MySQL at 33%, Oracle at 29%, and PostgreSQL at 21% (multiple platforms are often in use and respondents could choose more than one platform).
As Grant Fritchey, author and co-author of several books on SQL Server and a DevOps Advocate for Redgate, comments: “While our research focused on the need for database monitoring, the issues it uncovered are practically universal given the current business environment.
“For example, we know that recruitment may be challenging for many, and there is a renewed desire to adopt technologies like the cloud, while still improving performance. And with the uncertainty ahead, we could see lasting changes for years to come.”
42% of companies experienced a data loss event that resulted in downtime last year. That high number is likely caused by the fact that while nearly 90% are backing up the IT components they’re responsible for protecting, only 41% back up daily – leaving many businesses with gaps in the valuable data available for recovery.
In order to select an appropriate backup solution for your business, you need to think about a variety of factors. We’ve talked to several industry professionals to get their insight on the topic.
Oussama El-Hilali, CTO, Arcserve
Before selecting a backup solution, IT leaders must ask themselves where the majority of data generated by their organization resides. As SaaS-based collaboration and storage systems grow in popularity, it’s essential to choose a backup solution that can protect their IT environment.
Many people assume cloud platforms automatically back up their data, but this largely isn’t the case. They’ll need a solution with SaaS backup capabilities in place to safeguard against cyberattacks and IT outages.
To further prevent downtime, organizations should also consider backup solutions that offer continuous replication of data. That way, in case of unplanned outages, they can seamlessly fail over to a replica of their systems, applications and data to keep the organization up and running. This is also helpful in case of a ransomware attack or other data corruption – organizations can revert to a “known good” state of their data and pick up where they left off before the incident. Generally, all backup tools should provide redundancy by using the rule of three – have at least three copies of your data, store the copies on at least two different media types, and keep at least one of those copies offsite.
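The rule of three mentioned above (often called 3-2-1) can be sketched as a quick programmatic check; the plan format and figures here are hypothetical, purely for illustration:

```python
# Sketch of validating a backup plan against the "rule of three" (3-2-1):
# at least 3 copies of the data, on at least 2 different media types,
# with at least 1 copy offsite. The plan structure here is invented.

def satisfies_3_2_1(copies):
    media_types = {c["media"] for c in copies}
    offsite = [c for c in copies if c["offsite"]]
    return len(copies) >= 3 and len(media_types) >= 2 and len(offsite) >= 1

plan = [
    {"media": "disk",  "offsite": False},   # primary data on local disk
    {"media": "nas",   "offsite": False},   # backup to a local NAS
    {"media": "cloud", "offsite": True},    # replica in cloud object storage
]
print(satisfies_3_2_1(plan))  # True
```

Dropping the cloud replica from the plan would fail the check twice over: only two copies remain, and none of them is offsite.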
Finally, it’s important to weigh the pros and cons of on-prem versus cloud-based backups. Users should keep in mind that, in general, on-prem hardware is more susceptible to data loss in the event of a natural disaster. There’s no “one size fits all” solution for every organization, so it’s best to take a holistic look at your specific needs before you start looking for a solution – and continue to revisit and update the plan as your organization evolves.
Nathan Fouarge, VP Of Strategic Solutions, NovaStor
When looking for a backup solution for your business there are a number of questions to ask to narrow down the solutions you want to look at.
Here’s what you should be prepared to answer in order to select a backup solution for your business:
- How much downtime can you afford, or how fast do you need to be back up and running? In other words, what is your recovery time objective (RTO)?
- How much data are you willing to lose? In other words, what is your recovery point objective (RPO)? Are you willing to take daily backups, accepting the possibility of losing an entire day’s worth of work, or do you need a solution that can do hourly or continuous backups?
- How long do you need to keep historical data? Do you have compliance requirements that make you keep your data for a long time?
- How much data do you have to backup and what type of data/applications do you need to back up?
- How many copies of the data do you need, and where do you want to store them? Do you want to follow the recommended 3-2-1 backup strategy, with 3 copies of the data? Do you want to keep all the backups locally, offsite (USB drive or replicated NAS), or in the cloud?
- Then the ultimate question: how much are you willing to spend on a backup solution?
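The RPO trade-off in the questions above comes down to simple arithmetic: worst-case data loss is roughly the interval between backups. A minimal sketch, with illustrative figures only:

```python
# Worst-case data loss (RPO exposure) is roughly the backup interval: with
# daily backups, a failure just before the next run loses up to 24 hours
# of work. Intervals are in hours; all figures are illustrative.

def worst_case_loss_hours(backup_interval_hours):
    return backup_interval_hours

def meets_rpo(backup_interval_hours, rpo_hours):
    return worst_case_loss_hours(backup_interval_hours) <= rpo_hours

print(meets_rpo(24, rpo_hours=1))   # daily backups cannot meet a 1-hour RPO
print(meets_rpo(1, rpo_hours=1))    # hourly backups just meet it
```

This is why the RPO answer drives the choice between daily, hourly and continuous backup more than any other question on the list.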
Once you have all of those questions answered, you can look into which solutions fit your needs. More than likely, once you start looking for solutions that fit your criteria, you will have to reevaluate some of your answers to the questions above.
Konstantin Komarov, CEO, Paragon Software
The most important part is how you back up your data, not how you organize it. The key aim is to keep data safe regardless of whether you back up a single database or clone the entire system. The best practice, and the most cost-effective way, is to implement incremental backups and replicate the data both to local storage and to the cloud.
Incremental backup is an approach in which only the updated parts of a system or database are replicated, not the whole thing. This shortens the backup process and reduces the amount of storage space used. Replicating both to local storage and to the cloud guarantees the best safety for your data in case the physical disk you are backing the data up to is damaged or lost.
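A minimal sketch of the incremental approach, assuming content hashing to detect changed files; the in-memory file dictionary stands in for a real filesystem walk, and all names are hypothetical:

```python
import hashlib

# Minimal sketch of an incremental backup: only files whose content changed
# since the previous backup run are copied. File contents are in-memory
# here; a real implementation would walk a filesystem and write the changed
# files to both local storage and the cloud.

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def incremental_backup(files, previous_index):
    """Return (new_index, changed_paths): copy only what changed."""
    new_index, changed = {}, []
    for path, data in files.items():
        digest = fingerprint(data)
        new_index[path] = digest
        if previous_index.get(path) != digest:
            changed.append(path)
    return new_index, changed

index, changed = incremental_backup({"a.txt": b"v1", "b.txt": b"v1"}, {})
print(sorted(changed))                      # first run backs up everything
index, changed = incremental_backup({"a.txt": b"v2", "b.txt": b"v1"}, index)
print(changed)                              # only the modified file
```

The first run is effectively a full backup; every run after that copies only the delta, which is where the time and storage savings come from.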
However, to make backup effective and non-stop, it needs to be scheduled and managed by an application deployed on a dedicated endpoint, which should work side by side with your IT infrastructure so as not to slow down or disrupt the entire system. So the best decision would be to build your own backup using open cloud backup platforms, which consist of ready-to-go algorithms and tools to create a solution fully adjusted to the needs of a particular business.
Ahin Thomas, VP, Backblaze
When choosing a backup solution for your business, consider three factors: optimize for remote first, sync vs. backup, and recovery.
As businesses grow, implementing a strong backup strategy is challenging, especially when access to employees can change at a moment’s notice. That’s why it’s important to have a backup solution that is easy to deploy and requires little to no interfacing with employees—your COVID-stressed IT team will thank you.
Secondly, Dropbox and Google Drive folders are not backup solutions. They require users to drop files in designated folders, and any changes made to a file are synced across every device. A good backup solution will ensure all data is backed up to the cloud, and will work automatically in the background, backing up all new or changed data.
Data recovery is the final piece of the puzzle, and most often overlooked. Data loss emergencies are stressful, so it is vitally important to understand how recovery works before you choose a solution. Make sure it’s fast, easy, and works whether you’re on or off site. And test it regularly! You never know when your coworker (aka kid) will spill a sippy cup all over your laptop.
Nigel Tozer, Solutions Director EMEA, Commvault
For many organizations, the realization that their backup products are no longer fit for purpose comes as a very unwelcome discovery. Anyone arriving at this kind of crossroads faces some big decisions: one of the most frequently occurring is whether to add to what you have, or go for something new.
For anyone in that position, there are four simple considerations that can help inform decisions about backup strategy:
- Flexibility – Make sure your backup solution supports a wider ecosystem than just what you’re using today. You don’t want it to hinder your agility or cloud adoption down the line.
- Automation – Look for solutions where intelligent automation, even AI, can help dispense with the specialist or mundane elements of backup processes and free up busy IT teams’ time.
- Budget – Low cost software that needs a dedupe appliance as you grow, or an appliance with a rigid upgrade path can turn out to be more costly long term – so do your research.
- Consolidation – Many products typically means silos, wasted space and more complexity. Consolidating to a backup platform instead of multiple products can make a real difference in infrastructure savings, and reduced complexity.
Cybersecurity and, to a lesser but growing extent, compliance are the most pressing priorities for MSPs and their customers this year, according to a Kaseya survey of 1,300 owners and technicians of MSP firms in more than 50 countries.
“Respondents to this year’s survey overwhelmingly agreed that their clients need more cybersecurity support from them. This is especially true in today’s uncertain environment,” said Jim Lippie, senior vice president and GM of partner development at Kaseya.
“As more small and midsize businesses look to maintain vital security operations and decrease IT costs internally ahead of an economic downturn, they will lean on the expertise and services provided by MSPs to keep their companies operating.”
While responses to the 2020 survey were collected in December 2019 prior to the coronavirus crisis, the pandemic has only increased the focus on a need for expanded IT security measures.
Companies of all sizes have recently seen an increase in cyberattacks with an influx of personal devices connecting to the corporate network and as malicious actors hope to take advantage of the uncertain times.
“More than half, or 60 percent, of our respondents said their clients experienced downtime from an outage in the past year,” Lippie continued.
“In our current, unprecedented climate, an outage can mean the end for a small business. So for MSPs, who are the IT backbone of these small businesses, there’s a significant opportunity to diversify their clients’ cybersecurity solutions and strategy in order to respond agilely to any threat that comes their way and maintain their livelihood.”
MSPs and priorities: Security dominates
Both MSPs and their customers have faced increased security threats year over year. Because MSPs have access to their clients’ IT environments through remote monitoring and management (RMM) tools, they are an ideal target for malicious actors who see opportunity in the ability to extend the impact of their attacks. In fact, a little more than 1 in 3 respondents (37 percent) said they felt their MSP business was more prone to cybercrime now than it was in 2019.
On top of the concern for their own organization’s security, MSPs must contend with increased cyber risks to their clients. Almost all respondents (95 percent) have had either some or most of their clients turn to them for counsel on cybersecurity plans and best practices.
Additionally, nearly three in four respondents said that 10 to 20 percent of their clients experienced at least one cyberattack in the past year.
Companies need more cybersecurity support from their MSP partners. Among a ranking of several top IT needs, such as “supporting mobile devices,” “legacy system replacement” and “public cloud adoption, migration and support,” 29 percent of respondents listed “meeting security risks” as their clients’ top IT need.
“Cybersecurity services,” like antivirus, anti-malware and ransomware protection, followed closely at 14 percent. Together, these two options make up more than 40 percent of responses to the question. With ransomware and malware attacks making headlines every day, MSPs have an opportunity to protect existing and future customers by providing multi-layered security and backup services.
The need for compliance services is growing
With the increasing number of regulations, including the CCPA and the New York Stop Hacks and Improve Electronic Data Security (SHIELD) Act, data privacy has become a necessity for small and large organizations alike. In fact, two-thirds of respondents reported that their clients struggle to meet compliance requirements, and nearly one-third reported an increased need for compliance services in the past two years.
As our dependence on software and other technologies grows, regulators will continue to enact data privacy laws. This presents an opportunity for MSPs to develop and leverage a niche expertise in this space to help clients maintain compliance with an increasingly complex set of regulations.
RMM remains MSPs’ core application of choice
For more than half of respondents (61 percent), RMM remains the most important application, followed by PSA (21 percent) and IT documentation (11 percent).
More important than the applications themselves, however, is integration between these core applications. In fact, nearly 70 percent of respondents said that integration between their core IT applications is very important, and 81 percent responded that this integration could help their organization drive better bottom-line profits.
MSPs show growth through new offerings and value-based pricing
In the past decade, MSPs have evolved greatly from simply providing break-fix services to implementing full-fledged suites of solutions. Driving this evolution is the ability of MSPs to agilely respond to emerging needs in the market.
Nearly 90 percent of respondents consider the expansion of their service offerings important, which makes sense: The most successful, high-growth MSPs — those with an average monthly recurring revenue growth greater than 20 percent — have added about four to five new services to their offerings in the past two years.
Underlying all of this growth is a continued shift toward value-based pricing models. Respondents this year opted for a value-based pricing strategy rather than cost-based or price-match strategies. Value-based pricing strategies develop prices based on the end result and the value delivered to the customer.
Among all respondents, 38 percent reported that more than half of their revenue comes from a value-based pricing strategy. Contrastingly, only 17 percent of respondents reported that the majority of their revenue came from a cost-based pricing strategy.
Cloud support decreases but remains an opportunity for MSP growth
Public and private cloud adoption are among the top IT needs in 2020. However, respondents who manage client cloud environments dropped from 70 percent in the 2019 survey to 56 percent this year for public cloud, and from 59 percent in the 2019 survey to 49 percent for private cloud.
Despite this, there remains an opportunity for MSPs to grow their cloud management offerings, as about a fifth (21 percent) of successful, high-growth MSPs manage their clients’ public cloud environments.
42% of companies experienced a data loss event that resulted in downtime last year, according to Acronis. That high number likely reflects the fact that while nearly 90% are backing up the IT components they’re responsible for protecting, only 41% back up daily – leaving many businesses with gaps in the valuable data available for recovery.
The figures revealed illustrate the new reality that traditional strategies and solutions to data protection are no longer able to keep up with the modern IT needs of individuals and organizations.
The importance of implementing a cyber protection strategy
The annual survey, completed this year by nearly 3,000 people, gauges the protection habits of users around the globe. The findings revealed that while 91% of individuals back up data and devices, 68% still lose data as a result of accidental deletion, hardware or software failure, or an out-of-date backup.
Meanwhile, 85% of organizations aren’t backing up multiple times per day; only 15% report they are. 26% back up daily, 28% back up weekly, 20% back up monthly, and 10% aren’t backing up at all – which can mean days, weeks, or months of data lost with no possibility of complete recovery.
Of those professional users who don’t back up, nearly 50% believe backups aren’t necessary – a belief the survey contradicts: 42% of organizations reported data loss resulting in downtime this year, and 41% reported losing productivity or money due to data inaccessibility.
Furthermore, only 17% of personal users and 20% of IT professionals follow best practices, employing hybrid backups on local media and in the cloud.
These findings stress the importance of implementing a cyber protection strategy that includes backing up your data multiple times a day and practicing the 3-2-1 backup rule: create three copies of your data (one primary copy and two backups), store your copies in at least two types of storage media, and store one of these copies remotely or in the cloud.
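The 3-2-1 rule described above is easy to check mechanically against an inventory of backup copies. A minimal sketch of such an audit (the `BackupCopy` model and media labels are illustrative, not from the survey):

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str      # e.g. "disk", "tape", "cloud"
    offsite: bool   # stored remotely or in the cloud?

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """Check the 3-2-1 rule: at least 3 copies, on at least
    2 types of storage media, with at least 1 copy offsite."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

# One primary disk copy, one tape copy, one cloud copy: compliant.
copies = [
    BackupCopy("disk", offsite=False),
    BackupCopy("tape", offsite=False),
    BackupCopy("cloud", offsite=True),
]
print(satisfies_3_2_1(copies))  # True
```

Three copies all on local disk would fail the check: the rule requires diversity of media and location, not just copy count.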
“Individuals and organizations keep suffering from data loss and cyberattacks. Everything around us is rapidly becoming dependent on digital, and it is time for everyone to take cyber protection seriously,” said Acronis Chief Cyber Officer, Gaidar Magdanurov.
“Cyber protection in the digital world becomes the fifth basic human need, especially during this unprecedented time when many people must work remotely and use less secure home networks.
“It is critical to proactively implement a cyber protection strategy that ensures the safety, accessibility, privacy, authenticity, and security of all data, applications, and systems – whether you’re a home user, an IT professional, or an IT service provider.”
Cyber protection changes the game
With increasing cyberattacks, traditional backup is no longer sufficient to protect data, applications, and systems; relying on backup alone for true business continuity is too dangerous. Cybercriminals target backup software with ransomware and try to modify backup files, which magnifies the need for authenticity verification when restoring workloads.
It makes sense, then, that the survey indicated a universally high level of concern about cyberthreats like ransomware. 88% of IT professionals reported concern over ransomware, 86% are concerned about cryptojacking, 87% are concerned about social engineering attacks like phishing, and 91% are concerned about data breaches.
Among personal users, awareness and concern regarding all four of these threat types were nearly as high. In fact, compared to the 2019 survey their concern about cyberthreats rose by 33%.
The survey also revealed a lack of insight into data management, exposing a great need for cyber protection solutions with greater visibility and analytics. The surprising findings indicate that 30% of personal users and 12% of IT professionals wouldn’t know if their data was modified unexpectedly.
30% of personal users and 13% of IT professionals aren’t sure if their anti-malware solution stops zero-day threats. Additionally, 9% of organizations reported that they didn’t know if they experienced downtime as a result of data loss this year.
To ensure complete protection, secure backups must be part of an organization’s comprehensive cyber protection approach, which includes ransomware protection, disaster recovery, cybersecurity, and management tools.
Cyber protection recommendations
Whether you are concerned about personal files or your company’s business continuity, there are five simple recommendations to ensure fast, efficient, and secure protection of your workloads:
- Always create backups of important data. Keep multiple copies of the backup both locally (so it’s available for fast, frequent recoveries) and in the cloud (to guarantee you have everything if a fire, flood, or disaster hits your facilities).
- Ensure your operating systems and applications are current. Relying on outdated OSes or apps means they lack the bug fixes and security patches that help block cybercriminals from gaining access to your systems.
- Beware suspicious email, links, and attachments. Most virus and ransomware infections are the result of social engineering techniques that trick unsuspecting individuals into opening infected email attachments or clicking on links to websites that host malware.
- Install anti-virus, anti-malware, and anti-ransomware software while enabling automatic updates so your system is protected against malware, with the best software also able to protect against zero-day threats.
- Consider deploying an integrated cyber protection solution that combines backup, anti-ransomware, anti-virus, vulnerability assessment and patch management in a single solution. An integrated solution increases ease of use, efficiency and reliability of protection.
58% of C-level executives at small and medium businesses (SMBs) said their biggest data storage challenge is security vulnerability, according to Infrascale.
The research, conducted in March 2020, is based on a survey of more than 500 C-level executives. CEOs represented 87% of the group. Almost all of the remainder was split between CIOs and CTOs.
“Our research indicates that 21% of SMBs do not have data protection solutions in place,” said Russell P. Reeder, CEO of Infrascale. “That’s a problem, because every modern company depends on data and operational uptime for its very survival. And this has never been more important than during the unprecedented times we are currently facing.”
Data protection means different things to different people
Certain aspects of data protection are more important than others depending upon an individual’s unique experiences and position. But data protection clearly delivers significant value from many vantage points.
When asked what data protection means to them, 61% of the survey group named data security and encryption. The same share said data backup. Nearly as many (59%) defined data protection as data recovery, while 54% cited anti-malware services.
Forty-six percent said data protection addresses email protection. Data archiving and the ability to become operational quickly after a disaster each captured 45% of the survey group’s vote.
Meanwhile, 44% of the group said data protection means ransomware protection/mitigation. The same share named physical device protection for endpoints such as laptops and mobile phones. And 32% said that for them data protection involves processes that prevent user error.
“Data protection can come into play in a wide array of important ways – including data security and encryption, data recovery, email protection and data archiving. It also provides the ability to recover quickly from a disaster, protection from and mitigation of ransomware, and physical device protection. Plus, it can prevent user error,” said Reeder.
“All of the above are valuable for businesses. These benefits contribute to the success of many businesses today, and implementing data protection to these ends will better position organizations for the future.”
Opinions about data protection vary by industry as well
The research suggests there is significant variation in what top executives from different sectors consider the most important aspects of data protection.
In the legal sector, 89% of executives said data protection provides data security and encryption. Seventy-one percent of the top leaders in the healthcare sector agreed. Data security and encryption was the top answer among retail/ecommerce and telecommunications leaders as well, although with lower shares – 67% and 52%, respectively.
Top executives in education see data backup and data recovery as the most important aspects of data protection. Sixty-one percent of this group said they hold this belief. For 57% of the top leaders in accounting, banking or finance, data backup is the key concern in data protection.
Cyberattacks are SMB leaders’ top overall data protection concern
The overall survey group said cyberattacks are the biggest data protection issue their companies are facing. Nearly half (49%) of the group voiced their concern about hacking.
Micro disasters such as corrupted hard drives and malware infections were the second most commonly indicated concern, garnering a 46% share from the group. System crashes (41%), data leaks (39%), ransomware attacks (38%), and human errors (38%) were next on the list.
There was some variation in sector response here as well. Top leaders in education (64%), telecommunications (63%) and healthcare (54%) said that micro disasters are their biggest data protection issues.
But more than half of the survey respondents in both the retail (54%) and financial sectors (53%) said cyberattacks such as ransomware are their leading data protection challenges.
“Cyberattacks like ransomware are a major challenge for businesses today,” said Reeder. “But organizations can put defensive measures in place to lower their susceptibility to attack.”
Most SMBs have data protection in place, but a significant minority remains exposed
Views about data protection definitions – and what is most important to the protection of SMB data – may vary. But most SMBs clearly believe it is important to have a data protection and/or backup and disaster recovery solution in place, as 79% of the survey group said they already do.
However, while the majority has taken steps to protect data, the remainder – which represents a significant share at 21% – clearly has not. And 13% of SMB C-level executives said they do not have any data protection strategy in place. That leaves these businesses vulnerable.
“Each organization is different,” said Reeder. “But one thing all businesses have in common is a desire to eradicate downtime and data loss. Organizations can and should protect their data, and their businesses as a whole, by enabling comprehensive data protection with modern backup and disaster recovery solutions and strategies.”
In an era of technological transformation and cyber everywhere, the attack surface is growing exponentially as cybercriminals attack operational systems and backup capabilities simultaneously, in highly sophisticated ways that lead to enterprise-wide destructive cyberattacks, a Deloitte survey reveals.
A majority of C-suite and executive poll respondents (64.6%) report that the growing threat of destructive cyberattacks is one of the top cyber risks at their organization.
It’s time for senior leadership to modernize risk management programs and solutions to keep pace with the current threats and technologies to incorporate new educational tools, technical solutions and business strategies.
A truly viable cyber resilience program can improve an organization’s ability to be ready for, respond to, and recover from a destructive cyberattack. Over a quarter of respondents (27.2%) believe a comprehensive approach to cyber resilience would most improve their organization’s ability to address these potential extinction-level events.
Why it matters
The well-publicized NotPetya attack, for example, spread beyond its intended target within seconds, highlighting how cyberattacks can compromise countless devices across global networks and render servers and endpoints inoperable.
From destructive malware to the growing threat of ransomware, attacks like these can propagate quickly and extensively impact an entire enterprise network.
Even organizations with fundamentally sound risk management programs will need to adapt to emerging and elusive cyber risks and the destructive impacts they present. Improving cyberattack readiness, response, and recovery will require a new approach to many traditional risk domains.
Why are these attacks so successful?
- Poor access management: A fundamental issue that is pervasive and is often the open door through which a destructive attack will initiate and spread.
- Weak cyber hygiene: Poor cyber hygiene has a direct impact on enterprise security and is most commonly seen in the form of missing patches, system misconfigurations, partially deployed security tools, and poor asset discovery and tracking.
- Poor asset management: This can happen when organizations have no knowledge of specific applications, operating systems, or other device information, and the relationship between those applications.
- Flat networks: Flat networks allow an adversary to easily maneuver to any system. Minimal segmentation and zoning allow for lateral movement, expanding the adversary’s reach into the enterprise.
- Aggressive redundancy: Traditional recovery results in aggressive data redundancy for critical systems. When malware is introduced, these costly backup capabilities accelerate its spread across environments.
- Limited business awareness: Leadership may still be operating under the assumption that the time, money and effort put into traditional disaster recovery programs are going to protect them in a destructive malware scenario. They need to be aware of the gaps and refocus efforts on these emerging threats.
“Understanding your organization’s attack surface, and what implications a destructive cyberattack may have are important, but what is critical is to avoid ‘analysis paralysis’ and move quickly on deploying the proper technical solutions, like the cyber recovery vault, educational tools and business strategies.
“Senior leadership and boards need to get a grasp of what their traditional disaster recovery plan provides, what it does not provide, and how an attack might play out.
“When boards are made aware of the risk, these capabilities are often prioritized and quickly implemented,” said Pete Renneker, technical resilience leader in cyber risk services and a managing director at Deloitte & Touche LLP.
“Physical and traditional outages are often measured in hours or days. Whereas destructive attacks are often measured in weeks or months, which can be very difficult to recover from.
“To be successful, you have to have strong agile capabilities and leaders on the ground who can address the risks and interact effectively in the event of a large-scale incident,” said Kieran Norton, infrastructure security leader in cyber risk services and principal at Deloitte & Touche LLP.
Building a comprehensive cyber approach
A viable cyber resiliency program expands the boundaries of traditional risk domains to include new capabilities like employee support services; out-of-band communication and collaboration tools; and a cyber recovery vault.
A cyber recovery vault is isolated on the network to limit lateral movement by a threat actor, secures the environment physically and logically, prevents deletion or destruction of critical data, and can be analyzed to accelerate identification of suspicious activity.
Given its design, the data sits in a cryogenically frozen state, meaning malware may enter the vault but will be unable to deliver its payload. This makes it possible to extract and cleanse affected data, recover critical systems, and restore the business as soon as possible.
With 26.3% of respondents reporting that their organization’s biggest challenge in implementing a cyber recovery vault is budget restrictions, organizations should consider focusing first on deploying a critical materials vault limited to protecting essential services.
This accelerates protection against these threats, reduces the initial spend, and enables the organization to analyze additional protection requirements in parallel.
Two years ago, Apple abandoned its plan to encrypt iPhone backups in the iCloud in such a way that makes it impossible for it (or law enforcement) to decrypt the contents, a Reuters report claimed on Tuesday.
Based on information from multiple unnamed FBI and Apple sources, the report says that the decision was made after Apple shared its plan for end-to-end encrypted iCloud backups with the FBI and the FBI objected to it.
According to the sources, Apple:
- Didn’t want to be attacked for or be seen as protecting criminals
- Was convinced by the FBI’s arguments (i.e., that being able to access the contents of iPhone backups in the iCloud is crucial to the success of thousands of investigations)
- Didn’t want to get into another court battle with the FBI over the matter or to be used as an excuse for new legislation against encryption.
End-to-end encrypted iCloud backups are not available, but…
Apple and the FBI declined to comment on these claims. More importantly, and despite how it might seem initially, “Reuters could not determine why exactly Apple dropped the plan.”
Whether the decision was made entirely or partly because of the FBI’s objections is, therefore, unknown. One of the Reuters sources – a former Apple employee – said it was possible the encryption project was dropped for other reasons (e.g., to prevent customers from being locked out of their backups because they forgot their passphrase).
Daring Fireball publisher John Gruber pointed out the same thing, and said that he “would find it less surprising to know that Apple acquiesced to the FBI’s request not to allow encrypted iCloud backups than that Apple briefed the FBI about such a plan before it was put in place.”
If you want to keep your backups for your eyes only
Whether Apple has canceled its plan to offer encrypted iCloud backups for good or just temporarily, users need to be aware that some of the information they back up to iCloud can be decrypted by Apple and, consequently, be made available to law enforcement.
The data that is encrypted end-to-end (i.e., is protected with a key derived from information unique to the user’s device and their device passcode) includes things like the iCloud Keychain (which includes all of user’s saved accounts and passwords), Wi-Fi passwords and payment information.
Data that is encrypted in transit and on the server, but with a key known to Apple, includes the device’s backup, Safari history and bookmarks, photos, calendars, contacts, voice memos, and more.
And, while Messages in iCloud does use end-to-end encryption, if the user has iCloud Backup turned on, their backup includes a copy of the key protecting their Messages (so they can recover them if they lose access to iCloud Keychain and their trusted devices). That means law enforcement can also access them, if Apple allows it.
In short: if you use an iPhone and you want all of your data to remain private and encrypted in a way that makes it impossible (or very, very difficult) for anyone to decrypt it, don’t back it up to iCloud. Instead, opt for an encrypted local backup on a Mac or PC through iTunes, choose a strong passphrase, and make sure to remember it.
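Tools that offer passphrase-protected local backups generally stretch the passphrase into an encryption key with a key-derivation function, which is why a strong, memorable passphrase matters so much. A generic illustration using PBKDF2 – not Apple's actual scheme; the salt and iteration count here are placeholders:

```python
import hashlib

def derive_backup_key(passphrase: str, salt: bytes) -> bytes:
    """Stretch a passphrase into a 256-bit key with PBKDF2-HMAC-SHA256.
    The high iteration count deliberately slows down brute-force guessing."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

# Placeholder salt; in practice a random value (e.g. os.urandom(16))
# stored alongside the backup, unique per backup.
salt = b"per-backup-random-salt"
key = derive_backup_key("correct horse battery staple", salt)
print(len(key))  # 32 bytes, suitable for a 256-bit cipher key
```

Because the key exists only when the passphrase is supplied, a backup encrypted this way is unreadable without it – including to whoever holds the backup file, which is the property iCloud backups currently lack.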