Even the world’s freest countries aren’t safe from internet censorship

The largest collection of public internet censorship data ever compiled shows that even citizens of what are considered the world’s freest countries aren’t safe from internet censorship.

A team from the University of Michigan used its own Censored Planet tool, an automated censorship tracking system launched in 2018, to collect more than 21 billion measurements over 20 months in 221 countries.

“We hope that the continued publication of Censored Planet data will enable researchers to continuously monitor the deployment of network interference technologies, track policy changes in censoring nations, and better understand the targets of interference,” said Roya Ensafi, U-M assistant professor of electrical engineering and computer science who led the development of the tool.

Poland blocked human rights sites, India same-sex dating sites

Ensafi’s team found that censorship is increasing in 103 of the countries studied, including unexpected places like Norway, Japan, Italy, India, Israel and Poland. These countries, the team notes, are rated some of the world’s freest by Freedom House, a nonprofit that advocates for democracy and human rights.

Those six countries were among nine where Censored Planet found significant, previously undetected censorship events between August 2018 and April 2020; the team also found previously undetected events in Cameroon, Ecuador and Sudan.

While the United States saw a small uptick in blocking, mostly driven by individual companies or internet service providers filtering content, the study did not uncover widespread censorship. However, Ensafi points out that the groundwork for that has been put in place here.

“When the United States repealed net neutrality, they created an environment in which it would be easy, from a technical standpoint, for ISPs to interfere with or block internet traffic,” she said. “The architecture for greater censorship is already in place and we should all be concerned about heading down a slippery slope.”

It’s already happening abroad, the researchers found.

“What we see from our study is that no country is completely free,” said Ram Sundara Raman, U-M doctoral candidate in computer science and engineering and first author of the study. “We’re seeing that many countries start with legislation that compels ISPs to block something that’s obviously bad like child pornography or pirated content.

“But once that blocking infrastructure is in place, governments can block any websites they choose, and it’s a very opaque process. That’s why censorship measurement is crucial, particularly continuous measurements that show trends over time.”

Norway, for example (tied with Finland and Sweden as the world’s freest country, according to Freedom House), passed laws requiring ISPs to block some gambling and pornography content beginning in early 2018.

Censored Planet, however, uncovered that ISPs in Norway are imposing what the study calls “extremely aggressive” blocking across a broader range of content, including human rights websites like Human Rights Watch and online dating sites like Match.com.

Similar tactics show up in other countries, often in the wake of large political events, social unrest or new laws. News sites like The Washington Post and The Wall Street Journal, for example, were aggressively blocked in Japan when Osaka hosted the G20 international economic summit in June 2019.

News, human rights and government sites saw a censorship spike in Poland after protests in July 2019, and same-sex dating sites were aggressively blocked in India after the country repealed laws against gay sex in September 2018.

Censored Planet releases technical details for researchers, activists

The researchers say the findings show the effectiveness of Censored Planet’s approach, which turns public internet servers into automated sentries that can monitor and report when access to websites is being blocked.

Running continuously, it takes billions of automated measurements and then uses a series of tools and filters to analyze the data and tease out trends.
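
Censored Planet’s remote-measurement techniques are documented in the study’s technical release; the minimal sketch below only illustrates the underlying idea of probing access to a site and classifying the outcome. The test URLs (drawn from examples mentioned in this article) and the timeout are assumptions, and this direct probe is not the project’s actual protocol.

```python
# Illustrative only: a direct reachability probe, not Censored Planet's
# remote-measurement protocol. Test URLs and the timeout are assumptions.
import socket
import urllib.error
import urllib.request

TEST_URLS = [
    "https://www.hrw.org",    # human rights site mentioned in the article
    "https://www.match.com",  # dating site mentioned in the article
]

def probe(url, timeout=10):
    """Fetch a URL and classify the outcome."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return {"url": url, "outcome": "ok", "status": resp.status,
                    "bytes": len(resp.read())}
    except urllib.error.HTTPError as e:
        return {"url": url, "outcome": "http_error", "status": e.code}
    except (urllib.error.URLError, socket.timeout) as e:
        # Timeouts and resets are common signatures of blocking, but can also
        # be ordinary failures; only repeated measurements over time and from
        # many vantage points make the signal meaningful.
        return {"url": url, "outcome": "unreachable", "error": str(e)}

if __name__ == "__main__":
    for result in (probe(u) for u in TEST_URLS):
        print(result)
```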

The study also makes public technical details about the workings of Censored Planet that Raman says will make it easier for other researchers to draw insights from the project’s data, and help activists make more informed decisions about where to focus.

“It’s very important for people who work on circumvention to know exactly what’s being censored on which network and what method is being used,” Ensafi said. “That’s data that Censored Planet can provide, and tech experts can use it to devise circumventions.”

Censored Planet’s constant, automated monitoring is a departure from traditional approaches that rely on volunteers to collect data manually from inside countries.

Manual monitoring can be dangerous, as volunteers may face reprisals from governments. Its limited scope also means that efforts are often focused on countries already known for censorship, enabling nations that are perceived as freer to fly under the radar.

While censorship efforts generally start small, Raman says they could have big implications in a world that is increasingly dependent on the internet for essential communication needs.

“We imagine the internet as a global medium where anyone can access any resource, and it’s supposed to make communication easier, especially across international borders,” he said. “We find that if this continues, that won’t be true anymore. We fear this could lead to a future where every country has a completely different view of the internet.”

93% of businesses are worried about public cloud security

Bitglass released a report which uncovers whether organizations are properly equipped to defend themselves in the cloud. IT and security professionals were surveyed to understand their top security concerns and identify the actions that enterprises are taking to protect data in the cloud.

Orgs struggling to use cloud-based resources safely

93% of respondents were moderately to extremely concerned about the security of the public cloud. The report’s findings suggest that organizations are struggling to use cloud-based resources safely. For example, a mere 31% of organizations use cloud DLP, despite 66% citing data leakage as their top cloud security concern.

Similarly, organizations are unable to maintain visibility into file downloads (45%), file uploads (50%), DLP policy violations (50%), and external sharing (55%) in the cloud.

Many still using legacy tools

The report also found that many still try to use tools like firewalls (44%), network encryption (36%), and network monitoring (26%) to secure the use of the cloud–despite 82% of respondents recognizing that such legacy tools are poorly suited to do so and that they should instead use security capabilities designed for the cloud.

“To address modern cloud security needs, organizations should leverage multi-faceted security platforms that are capable of providing comprehensive and consistent security for any interaction between any device, app, web destination, on-premises resource, or infrastructure,” said Anurag Kahol, CTO at Bitglass.

“According to our research, 79% of organizations already believe it would be helpful to have such a consolidated security platform; now they just need to choose and implement the right one.”

Security teams need visibility into the threats targeting remote workers

Although only 33% of organizations are currently using a dedicated digital experience monitoring solution, nearly half of IT leaders are now likely to invest in these solutions as a result of the events of 2020, a NetMotion survey reveals.

Digital experience monitoring

In addition, the research revealed that tech leaders tend to overestimate the positive experience of remote workers – with IT estimating the quality of the remote working experience to be 21% higher than actual remote workers rated it.

“The past eight months have revealed fundamental blind spots in the way many IT teams have traditionally monitored the digital experiences of remote workers,” said Christopher Kenessey, CEO of NetMotion.

“Digital experience monitoring is emerging as the next crucial addition to IT’s toolbox in today’s remote working world, where IT no longer owns the networks that employees are using. Simply put, our research confirms that IT teams can’t fix what they can’t see.”

Remote work causing more technology issues, IT is hard-pressed to solve them

Since the beginning of COVID-19, nearly 75% of organizations have seen an increase in support tickets from remote workers, with 46% reporting a moderate increase and 29% reporting a large increase in workload, according to the survey. This extra burden is straining already stretched IT teams.

Further, from an IT, tools and technology perspective, 48% of workers prefer the experience of working in the office. That may be because IT has a harder time diagnosing employee tools and technology challenges outside of controlled office settings.

According to the survey, over 25% of IT teams admit they struggle to diagnose the root cause of remote worker issues, and ensuring reliable network performance was cited as the top challenge by IT leaders surveyed, with 46% reporting the problem.

Joining these issues, IT leaders listed the following challenges encountered this year:

  • Software and application issues (43%)
  • Remote worker cybersecurity (43%)
  • Hardware performance and configuration (38%)

Strained IT-employee relationship

The survey also revealed that the new remote work dynamic may be straining the IT-employee relationship, with remote workers not fully trusting IT to provide the help they need.

While 45% of remote workers say their IT department values employee feedback, 26% of employees said they didn’t feel that their feedback would change anything, and 29% were undecided.

Furthermore, while 66% of remote workers reported encountering an IT issue while working remotely, many are not sharing those issues with IT: 58% said they had encountered IT issues but did not share them with their IT team, and of the issues that were reported, only 46% were actually resolved.

“As everyone has gravitated towards a ‘work from anywhere’ status, IT teams have struggled to support employees. Workers are accessing a wider variety of resources from countless unknown networks, reducing visibility and making it exponentially more difficult for IT to diagnose the root cause of technology failures,” Kenessey said.

“Sadly, our research showed that nearly a quarter of remote workers would rather suffer in silence than engage tech teams. Without dedicated tools to monitor the experience of remote and mobile workers, IT teams are at a disadvantage when diagnosing and resolving technology challenges, and that’s putting greater strain on the IT-business relationship.”

Biometric device revenues to drop 22%, expected to rebound in 2021

In the aftermath of the COVID-19 pandemic, global biometric device revenues are expected to drop 22% ($1.8 billion) to $6.6 billion, according to a report from ABI Research. The entire biometrics market, however, will regain momentum in 2021 and is expected to reach approximately $40 billion in total revenues by 2025.

Global biometric device revenues in 2020

“The current decline in the biometrics market landscape stems from multifaceted challenges of a governmental, commercial, and technological nature,” explains Dimitris Pavlakis, Digital Security Industry Analyst.

“First, they have been instigated primarily due to economic reforms during the crisis which forced governments to constrain budgets and focus on damage control, personnel well-being, and operational efficiency.

“Governments had to delay or temporarily cancel many fingerprint-based applications related to user/citizen and patient registration, physical access control, on-premise workforce management, and certain applications in border control or civil, welfare, immigration, law enforcement, and correctional facilities.

“Second, commercial on-premise applications and access control suffered as remote work became the new norm for the first half of 2020. Lastly, hygiene concerns around contact-based fingerprint technologies pummelled biometrics revenues, forcing a sudden drop in fingerprint shipments worldwide.”

Not all is bleak, though

New use-case scenarios have emerged, and certain technological trends have risen to the top of implementation lists: for example, enterprise mobility and logical access control using biometrics as part of multi-factor authentication (MFA) for remote workers.

“Current MFA applications for remote workers might well translate into permanent information technology security authentication measures in the long term,” says Pavlakis. “This will improve biometrics-as-a-service (BaaS) monetization and authentication models down the line.”

Biometrics applications can now look toward new implementation horizons, with market leaders and pioneering companies like Gemalto (Thales), IDEMIA, NEC, FPC, HID Global, and Cognitec at the forefront of innovation.

“Future smart city infrastructure investments will now factor in additional surveillance, real-time behavioral analytics, and face recognition for epidemiological research, monitoring, and emergency response endeavors,” Pavlakis concludes.

How important is monitoring in DevOps?

The importance of monitoring is often left out of discussions about DevOps, but a Gartner report shows how it can lead to superior customer experiences.

The report provides the following key recommendations:

  • Work with DevOps teams during the design phase to add the instrumentation necessary to track business key performance indicators and monitor business metrics in production.
  • Automate the transmission of embedded monitoring results between monitoring and deployment tools to improve application deployments.
  • Use identified business requirements to develop a pipeline for delivering new functionality, and develop monitoring into a practice of continuous learning and feedback across stakeholders and product managers.

While the report focuses on application monitoring, the benefits of early DevOps integration apply equally to database monitoring, according to Grant Fritchey, Redgate DevOps Advocate and Microsoft Data Platform MVP: “In any DevOps pipeline, the database is often the pain point because you need to update it alongside the application while keeping data safe. Monitoring helps database developers identify and fix issues earlier, and minimizes errors when changes are deployed.”

Optimizing performance before releases hit production

Giving development teams access to live monitoring data during database development and testing, for example, can help them optimize performance before releases hit production. They can see immediately if their changes introduce operational or performance issues, and drill down to the cause.

Similarly, database monitoring tools can be configured to read and report on deployments made to any server and automatically deliver an alert back to the development team if a problem arises, telling them what happened and how to fix the issue.
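
Vendors implement this feedback loop in different ways; the sketch below is a generic illustration of the idea rather than any particular product’s feature. The monitoring database, table schema and webhook endpoint are hypothetical.

```python
# Generic sketch of a deployment-aware alerting loop. The table, webhook URL
# and query are hypothetical; commercial monitoring tools provide this
# capability out of the box.
import json
import sqlite3
import urllib.request

DB_PATH = "monitoring.db"                   # stand-in for a monitoring repository
WEBHOOK = "https://example.invalid/alerts"  # assumed team alert endpoint

def recent_failed_deployments(conn):
    """Return deployments recorded in the last hour whose status is 'failed'."""
    return conn.execute(
        """
        SELECT id, server, package, deployed_at, error_message
        FROM deployments
        WHERE deployed_at >= datetime('now', '-1 hour')
          AND status = 'failed'
        """
    ).fetchall()

def notify(deployment):
    """Send the failure back to the development team's alert channel."""
    dep_id, server, _package, _deployed_at, error_message = deployment
    body = json.dumps({
        "text": f"Deployment {dep_id} to {server} failed: {error_message}"
    }).encode()
    req = urllib.request.Request(
        WEBHOOK, data=body, headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

if __name__ == "__main__":
    with sqlite3.connect(DB_PATH) as conn:
        for deployment in recent_failed_deployments(conn):
            notify(deployment)
```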

This continuous feedback loop not only reduces time spent manually checking for problems, but speeds up communication between database development and operational teams. Most importantly, this activity all takes place on non-production environments, meaning fewer bad customer experiences when accessing production data.

This increased focus on monitoring is prompting many high performing DevOps teams to introduce third-party tools which offer more advanced features like the ability to integrate with the most popular deployment, alerting and ticketing tools.

The advantages

A good example is the financial services sector. Redgate’s report revealed that 66% of businesses in the sector now use a third-party monitoring tool, outpacing all other sectors. And while 61% of businesses deploy database changes once a week or more, compared to 43% across other sectors, issues with deployments are detected faster and recovered from sooner.

The Gartner report states: “By enabling faster recognition and response to issues, monitoring improves system reliability and overall agility, which is a primary objective for new DevOps initiatives.”

Many organizations are discovering there are big advantages in including the database in the monitoring conversation as well.

Database monitoring improves DevOps success for financial services orgs

The financial services sector is outperforming other industries, both in its adoption of database DevOps, and its use of monitoring to track database performance and deployments, a newly released edition of Redgate’s 2020 State of Database Monitoring Report has revealed.

Respondents were surveyed in April 2020 while most were in lockdown due to COVID-19. Those responses form the foundation of the report and reveal the significant adoption of third-party database monitoring tools in financial services, which may reflect the ongoing situation where many disparate IT teams are working remotely. This has increased the need to monitor database environments, particularly when zero downtime is now expected – often demanded – in the sector.

Key findings

The report shows that 61% of those in financial services deploy database changes once a week or more, compared to 43% across other sectors, and 52% deploy multiple times per day or week, compared to 35% in other sectors.

Server estates are also larger for financial services, with 36% having between 50 and 500 instances against 26% in other sectors. Notably, the biggest increase has been in estates with over 1,000 instances, which are up eight percentage points year-on-year.

These results have likely contributed to the 66% of companies in financial services reporting that they use a paid-for monitoring tool, compared to only 39% of respondents across other sectors.

To further complicate the picture, the cloud is changing the nature of those estates. 39 percent of those in financial services already host some or all of their databases in the cloud, and the report shows that migrating to and integrating with the cloud is the biggest challenge facing the sector in the next 12 months.

Yet, despite the far higher rate of database deployments and bigger, more mixed estates to manage, failed deployments are detected earlier and recovered from faster. 49 percent of failed deployments are detected within 10 minutes and 32% recover from those failed deployments in 10 minutes or under. In other sectors this falls to 39% and 24%, respectively.

For Grant Fritchey, Microsoft Data Platform MVP and Redgate Advocate, this is where the real value of advanced, third-party monitoring tools lies. “With faster deployments and large, hybrid estates, it’s no longer enough to monitor the usual suspects like CPU, disk space, memory and I/O capacity,” says Fritchey.

“Sectors like financial services – and Healthcare and IT – have recognized they need customizable alerts for the operational and performance issues they face, and every deployment displayed on a timeline alongside key SQL Server metrics. That way, when a bad deployment occurs, they can dive into the details, investigate the cause and remedy it immediately. If you can’t do that, frankly, you’ll have a hard time doing DevOps.”

Most security pros are concerned about human error exposing cloud data

A number of organizations face shortcomings in monitoring and securing their cloud environments, according to a Tripwire survey of 310 security professionals.

76% of security professionals state they have difficulty maintaining security configurations in the cloud, and 37% said their risk management capabilities in the cloud are worse compared with other parts of their environment. 93% are concerned about human error accidentally exposing their cloud data.

Few orgs assessing overall cloud security posture in real time

Attackers are known to run automated searches to find sensitive data exposed in the cloud, making it critical for organizations to monitor their cloud security posture on a recurring basis and fix issues immediately.

However, the report found that only 21% of organizations assess their overall cloud security posture in real time or near real time. While 21% said they conduct weekly evaluations, 58% do so only monthly or less frequently. Despite widespread worry about human errors, 22% still assess their cloud security posture manually.
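
As a concrete illustration of what a recurring, automated check can look like, the sketch below uses the AWS SDK for Python to flag S3 buckets that have no public access block configured. It is a minimal example of a single control, not a full posture assessment, and it assumes credentials with permission to list buckets and read their public access configuration.

```python
# Minimal illustration of a recurring posture check: flag S3 buckets without
# a public access block. Assumes AWS credentials with s3:ListAllMyBuckets and
# s3:GetBucketPublicAccessBlock permissions.
import boto3
from botocore.exceptions import ClientError

def buckets_missing_public_access_block(s3):
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            conf = s3.get_public_access_block(Bucket=name)[
                "PublicAccessBlockConfiguration"]
            if not all(conf.values()):       # some protections switched off
                flagged.append(name)
        except ClientError as e:
            if e.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)         # no block configured at all
            else:
                raise
    return flagged

if __name__ == "__main__":
    s3 = boto3.client("s3")
    for name in buckets_missing_public_access_block(s3):
        print(f"Bucket without public access block: {name}")
```

Run on a schedule, a check like this is what turns a monthly manual review into continuous assessment.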

“Security teams are dealing with much more complex environments, and it can be extremely difficult to stay on top of the growing cloud footprint without having the right strategy and resources in place,” said Tim Erlin, VP of product management and strategy at Tripwire.

“Fortunately, there are well-established frameworks, such as CIS benchmarks, which provide prioritized recommendations for securing the cloud. However, the ongoing work of maintaining proper security controls often goes undone or puts too much strain on resources, leading to human error.”

Utilizing a framework to secure the cloud

Most organizations utilize a framework for securing their cloud environments – CIS and NIST being two of the most popular – but only 22% said they are able to maintain continuous cloud security compliance over time.

While 91% of organizations have implemented some level of automated enforcement in the cloud, 92% still want to increase their level of automated enforcement.

Additional survey findings show that automation levels varied across cloud security best practices:

  • Only 51% have automated solutions that ensure proper encryption settings are enabled for databases or storage buckets.
  • 45% automatically assess new cloud assets as they are added to the environment.
  • 51% have automated alerts with context for suspicious behavior.

CIOs are apprehensive about interruptions due to expired machine identities

TLS certificates act as machine identities, safeguarding the flow of sensitive data to trusted machines. With the acceleration of digital transformation, the number of machine identities is skyrocketing.

At the same time, cybercriminals are targeting machine identities, including TLS keys and certificates, and the capabilities they provide, such as encrypted traffic, for use in attacks, according to Venafi.

The study evaluated the opinions of 550 CIOs from the United States, United Kingdom, France, Germany and Australia.

Compromised machine identities can have a major financial impact. A recent AIR Worldwide study estimated that between $51 billion and $72 billion in losses to the global economy could be eliminated through the proper protection of machine identities.

Key findings

  • 75% of global CIOs expressed concern about the security risks connected with the proliferation of TLS machine identities.
  • 56% of CIOs said they worry about outages and business interruptions due to expired certificates.
  • 97% of CIOs estimated that the number of TLS machine identities used by their organization would increase at least 10–20% over the next year.
  • 93% of respondents estimated that they had a minimum of 10,000 active TLS certificates in use across their organizations; 40% say they have more than 50,000 TLS certificates in use.

“According to a Venafi survey from 2018, once IT professionals deployed a comprehensive machine identity protection solution, they typically found 57,000 TLS machine identities that they did not know they had in their businesses and cloud,” said Kevin Bocek, vice president of security strategy and threat intelligence at Venafi.

“This study indicates that many CIOs are likely significantly underestimating the number of TLS machine identities currently in use. As a result, they are unaware of the size of the attack surface and the operational risks that these unknown machine identities bring to their organization. Whether it’s debilitating outages from expired certificates, or attackers hiding in encrypted traffic for extended periods of time, risks abound.

“The only way to eliminate these risks is to discover, continuously monitor and automate the lifecycle of all TLS certificates across the entire enterprise network—and this includes short lived certificates that are used in the cloud, virtual and DevOps environments.”
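
Full discovery and lifecycle automation is what dedicated machine identity platforms provide, but the monitoring piece Bocek describes can be illustrated with a small sketch that reports how many days remain on a host’s TLS certificate. The host list and the 30-day warning threshold below are assumptions.

```python
# Minimal sketch: report TLS certificates that expire soon. The host list and
# 30-day threshold are illustrative; real deployments need automated discovery
# across the whole estate, not a hand-maintained list.
import socket
import ssl
from datetime import datetime, timezone

HOSTS = ["example.com", "example.org"]  # assumed inventory
WARN_DAYS = 30

def days_until_expiry(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    for host in HOSTS:
        remaining = days_until_expiry(host)
        flag = "WARN" if remaining <= WARN_DAYS else "ok"
        print(f"{flag}: {host} certificate expires in {remaining} days")
```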

With remote working on the rise, infosec strategies need to evolve

The recent pandemic created a new normal that redefines the way business operates by eliminating security and physical work borders. An Avertium study found that having employees work from home during the pandemic saved U.S. employers more than $30 billion per day.

The study also predicts that 25-30% of the workforce will be working from home for multiple days per week by the end of 2021. For IT Security teams, this poses many new challenges.

“As we move forward with increasingly complex and fragmented business models, it’s crucial to fully assess and protect business assets from new and emerging cybercrimes,” says Paul Caiazzo, senior vice president, security and compliance at Avertium.

“The goal is to prevent a wide array of online threats and attacks, including data breaches, ransomware attacks, identity theft, hacking at home, business, cloud and hybrid cloud locations and online predators. Work with cybersecurity professionals who understand the increased threats in our new, post-COVID world, and can increase security to mitigate risk.”

Organizations losing visibility into their business network traffic

Many organizations’ security monitoring infrastructure is based upon the assumption that most employees are connected directly to the corporate LAN. By collecting data from Active Directory domain controllers, the perimeter firewall, server and workstation event logs, endpoint protection logs and other key on-premises data sources, an organization can maintain a high level of visibility into activity within its network.

But since many employees have moved outside the network perimeter, whether by using mobile devices or by working from home or another remote environment, organizations have lost visibility into a large percentage of their business network traffic.

Cybercriminals have pounced on the chance to leverage the resulting distraction for their own gain by turning up the volume of their efforts. Bad actors have recently made news by stealing personal data from unemployment benefit applicants in several states, waging ongoing COVID-19-themed phishing campaigns, and creating a 238% surge in cyberattacks against banks.

With so much at stake, it’s important to establish ways of monitoring telework security in a world with disappearing network perimeters.

Telework redefines the network perimeter

With a fully remote workforce, many organizations have been forced to make choices between usability and security. Existing VPN infrastructure was not designed to support a fully remote workforce.

Adoption of split-tunnel VPNs has been widely recommended as a solution to the VPN scalability problem. However, while allowing Internet-bound traffic to flow directly to its destination, instead of over the corporate VPN, increases usability, it does so at the cost of security and network visibility.

Cybercriminals are capitalizing on this opportunity. The United States Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) recently issued a joint alert noting an increase in cyberattacks exploiting VPN vulnerabilities.

With unmonitored connections to the public Internet, a remote workforce’s laptops can become compromised by malware or a cybercriminal without detection. These devices can then be used as a stepping stone to access the corporate environment via their VPN connection. For a remote workforce, employee devices and home networks are the new corporate network edge.

Securing the endpoint from the cloud

With the network perimeter shifted to teleworkers’ devices, securing the enterprise requires shifting security to these devices as well. Organizations require at least the same level of visibility into activity as they have on the corporate network.

By deploying agents onto the corporate-owned devices used by teleworkers, an organization can implement endpoint detection and response beyond the confines of the corporate network. This includes the ability to prevent and detect malware, viruses, ransomware, and other threats based upon signature analysis and behavioral analysis of potentially malicious processes.

However, an organization also requires centralized visibility into the devices of their remote workforce. For this purpose, a centrally-managed cloud-based solution is the ideal choice.

By moving security to the cloud, an enterprise reduces load on the corporate network and VPN infrastructure, especially in a split-tunnel connectivity architecture. Cloud-based monitoring and threat management also can achieve a higher level of scalability and performance than an on-premises solution.

A cloud-based zero trust platform can also act as an access broker to resources both on the public internet and the corporate private network.

Zero trust agents installed on telecommuters’ devices can securely and dynamically route all traffic to a cloud-based gateway and then on to the target resource in a way that provides the same or better control and visibility than even a well-configured traditional full tunnel VPN solution. By uniquely identifying the user, device and context, zero trust provides fine-grained precision on access control for the enterprise.

Data from the cloud-based ZTN gateway can additionally be used to perform behavioral analytics within a cloud-based SIEM platform, enhancing security visibility above and beyond traditional networking approaches.

Ensuring employee privacy while monitoring telework security

Monitoring telework security can be a thorny issue for an organization from a privacy and security perspective. On one hand, an organization needs to secure the sensitive data employees use for daily work in order to meet regulatory requirements; on the other, deploying network monitoring solutions in employees’ homes presents significant privacy issues.

An agent-based solution, supported by cloud-based infrastructure, provides a workable solution to both issues. For corporate-owned devices, company policy should have an explicit consent to monitor clause, which enables the organization to monitor activity on company devices.

Agents installed on these devices enable an organization to exercise these rights without inappropriately monitoring employee network activity on personal devices connected to the same home network.

Monitoring BYOD security

For personal devices used for remote work under a BYOD policy, the line between privacy and security becomes blurrier. Since devices are owned by the employee, it may seem more difficult to enforce installation of the software agent, and these dual-use devices may cause inadvertent corporate monitoring of personal traffic.

All organizations employing a BYOD model should document in policy the requirements for usage of personally owned devices, including cloud-based anti-malware and endpoint detection and response tools as described earlier.

The most secure way to enable BYOD is a combination of corporately managed cloud-based anti-malware/EDR, supplemented by a ZTN architecture. In such a model, traffic bound for public internet resources can be passed along to the destination without interference, but malicious activity can still be detected and prevented.

How businesses are adapting IT strategies to meet the demands of today

Businesses are adapting IT strategies, reprioritizing cloud adoption and automated database monitoring due to the effects of a global lockdown, remote working and a focus on business continuity, according to Redgate.

The report, which surveyed nearly 1,000 respondents in April 2020, reveals that while performance monitoring and backups remain the most common responsibilities for database professionals, managing security and user permissions have leapt to third and fourth place, respectively.

However, there seems to be a learning curve. As database professionals adopt these new roles, respondents say that staffing and recruitment is the second biggest challenge in managing estates.

Additionally, the two biggest causes of problems with database management come from human error (23%) and ad hoc user access (18%), which could be a result of increased remote working as tasks become more widely distributed.

Increase in the use of cloud-based platforms

In support of remote teams, respondents reported a rapid increase in the use of cloud-based platforms, particularly Microsoft Azure, which is up 15 percentage points in the last year.

With many businesses like Twitter announcing that remote working will become business-as-usual in the future, the report highlights why effective, reliable monitoring of database estates is critical to business longevity.

Perhaps as a consequence, only 18% of respondents continue to monitor their estates manually, and for those who are managing 50 instances or more, the number using a monitoring tool rises to 90%.

Cloud migration and monitoring are the biggest challenges

Microsoft Azure remains the most used cloud platform, with 20% of respondents using it frequently, and a further 34% using it occasionally, but migrating to the cloud can be difficult, and doing so with a distributed team doesn’t make things easier.

Estates are growing

Organizations with fewer than 100 instances have dropped for a second year, while those with over 100 instances have grown, and estates with over 1,000 instances grew by nine percentage points.

Monitoring is key to Database DevOps success

Third-party monitoring tools reduce Mean Time To Detection (MTTD) of deployment issues by 28%, and Mean Time To Recovery (MTTR) by 22%.

Satisfaction with monitoring tools is at an all-time high

68% of respondents say they are happy with their third-party monitoring tools, up seven percentage points on 2019, which may reflect the increased reliance on using such tools to monitor estates remotely.

SQL Server remains the most popular database platform

SQL Server is used by 81% of respondents, followed by MySQL at 33%, Oracle at 29%, and PostgreSQL at 21% (multiple platforms are often in use and respondents could choose more than one platform).

As Grant Fritchey, author and co-author of several books on SQL Server and a DevOps Advocate for Redgate, comments: “While our research focused on the need for database monitoring, the issues it uncovered are practically universal given the current business environment.

“For example, we know that recruitment may be challenging for many, and there is a renewed desire to adopt technologies like the cloud, while still improving performance. And with the uncertainty ahead, we could see lasting changes for years to come.”

5 questions about website and brand security every business owner should ask

Your website is the primary way your customers interact with your enterprise. You envision and create a website to:

  • Enhance customer engagement and conversion of visitors to customers.
  • Optimize revenue per customer.
  • Create repeat customers.
  • Retain customers, i.e., avoid customer attrition and abandonment.

Adding security to the overall business strategy should initiate the following questions to ensure you are making informed decisions for the safety of your brand and your customers.

1. What scripts are running right now on my website?

What services and scripts are you utilizing to optimize your website? Going a step beyond that, what scripts are running on your website?

There are thousands of third-party website scripts marketing teams routinely employ to achieve these goals. They include analytics, trackers, live or virtual customer engagement, social media scripts, and site monetization through advertising – just to name a few. New and innovative website scripts are constantly being released and those enterprises that best leverage them are at an advantage relative to their peers and competitors.
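
If you cannot answer the question off the top of your head, a rough first pass can be as simple as enumerating the external script sources referenced by your homepage. The sketch below (the URL is a placeholder) lists third-party script hosts found in the initial HTML; it will not see scripts injected later at runtime, which is exactly the fourth- and fifth-party problem discussed further on.

```python
# Quick inventory of external <script src> hosts on a page. The URL is a
# placeholder; this only captures scripts present in the initial HTML, not
# those injected later by other scripts.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urlparse

PAGE = "https://www.example.com"  # replace with your own site

class ScriptCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

if __name__ == "__main__":
    html = urllib.request.urlopen(PAGE, timeout=10).read().decode("utf-8", "replace")
    collector = ScriptCollector()
    collector.feed(html)
    own_host = urlparse(PAGE).netloc
    third_party = {urlparse(s).netloc for s in collector.sources
                   if urlparse(s).netloc and urlparse(s).netloc != own_host}
    print("Third-party script hosts:", sorted(third_party))
```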

However, your security department limits your usage of these powerful scripts by:

  • Limiting how many third-party scripts you use on your website.
  • Restricting your usage to mature tools and scripts and limiting your usage of newer, more innovative ones.
  • Preventing your usage of third-party scripts in your most impactful (but also sensitive) areas of your website.

Although these limitations were once put in place for good reason, they are absolutely constraining your ability to achieve the goal of maximizing business performance through optimization of your website capabilities.

2. Am I being consulted every time a new script is being added to our website?

If you don’t think you need to be consulted, what precautions are in place to ensure there is a protocol of checks and balances for your website security? Depending on how small or large your organization is, you may not have daily insight into the inner workings of your team.

The security team may be actively monitoring third-party scripts, which is a great first step in client-side website protection. However, a loophole that many people forget is how website owners are addressing fourth- and fifth-party scripts that the approved third-party scripts bring to your website.

3. Are we protecting our customers and their data?

Due to the lack of permissions that govern and limit the access and behavior of third-party website scripts, those third parties and the hackers that seek to compromise them have unrestricted access to nearly every aspect of the webpage, including customer data that is displayed on the page or entered by the customer.

This includes usernames, passwords, personally identifiable information, payment information, and other sensitive and regulated data. In fact, beyond being able to access this information, hackers who compromise these third-party scripts can exploit their unrestricted access to:

  • Record all customer keystrokes and data.
  • Manipulate webpage form-fields to dupe customers into revealing unnecessary and sensitive information to unauthorized third parties and/or hackers.
  • Inject popup boxes that request unnecessary and sensitive information from the customer.
  • Hijack the users’ mouse clicks and automatically redirect them to unauthorized external websites where customer information is phished and stolen.

4. What regulations should I be paying attention to? Are they releasing any information on new attack vectors?

Is HIPAA, PCI, GDPR or CCPA something your organization adheres to? The Internet has significantly extended an organization’s security perimeter: the third-party scripts that enable and enrich a website mean the attack surface now extends across the entire Internet, and attackers exploit that fact.

GDPR, HIPAA and PCI are only a few of the regulations set up to ensure companies (and individuals) are protecting the customer/consumer.
New attacks have a way of skirting around existing security measures. A simple Google search can tell you about new and emerging attack vectors and whether organizations are actively preventing them.

5. Could my organization be the next victim?

Are your competitors or similar companies in your field being targeted? Attackers such as the Magecart groups are known for going after eCommerce companies. Companies in the same industry also tend to use similar tools and scripts, which makes it easy for a hacker to move from one site to the next, checking for crossover and probing each website for already known vulnerabilities.

Just because it hasn’t happened yet doesn’t mean you are immune. Setting up precautions is truly the only way to ensure you are protected and can control all of the elements on your website.

In summary, it’s important to ensure you or at least your team holds all the cards. If you aren’t sure where to start, just ask for an analysis on the third-party scripts running on your website and see if there is anything that surprises you in the results.

Most find data security challenging with respect to UCaaS/CCaaS deployments

Security and network services are the top challenges for enterprises deploying or considering UCaaS and CCaaS technologies, and decision makers prefer bundled solutions that add security features, a software-defined network, and 24/7 performance monitoring, according to Masergy.

The study analyzed responses from IT decision makers at global enterprises that are evaluating, planning to implement or have implemented UCaaS or CCaaS.

Findings revealed that data security and network performance are the top two areas that IT focuses on to ensure their UCaaS and CCaaS solutions are successfully delivering on business goals.

Moreover, integrated solutions from a single provider take precedence, because respondents say they result in easier implementation and management with better visibility and fewer integration issues.

90+% prefer a bundled SD-network and want 24/7 monitoring

  • Most (93 percent) say it’s highly important that their UCaaS/CCaaS solutions come bundled with network services in a single, seamless approach.
  • When considering network service to support UCaaS/CCaaS, 90 percent of respondents rate a fully managed service with 24/7 monitoring and a software-defined network (SD-WAN) as highly important criteria.

70% say security is a challenge for UCaaS/CCaaS deployments

  • Seven in ten (70 percent) find data security challenging with respect to UCaaS and CCaaS deployments.
  • In fact, 93 percent find it highly important (46 percent “critical”) that security features and services are bundled with their UCaaS/CCaaS solutions.

Flexibility drives investment and buyers prioritize simplicity

  • Increased IT flexibility is the top driver (40 percent) of a UCaaS or CCaaS investment.
  • More than half of respondents (51 percent) prefer an integrated UCaaS/CCaaS solution — one that includes network services from a single provider.

“The maturity of UCaaS and CCaaS has today’s decision makers less worried about technology features and more concerned about secure application performance across the network and the cloud,” said Terry Traina, chief digital officer, Masergy.

Researchers use AI and create early warning system to identify disinformation online

Researchers at the University of Notre Dame are using artificial intelligence to develop an early warning system that will identify manipulated images, deepfake videos and disinformation online.

The project is an effort to combat the rise of coordinated social media campaigns to incite violence, sow discord and threaten the integrity of democratic elections.

Identify disinformation online: How does it work?

The scalable, automated system uses content-based image retrieval and applies computer vision-based techniques to root out political memes from multiple social networks.
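
The researchers’ pipeline is not spelled out here, but content-based image retrieval generally rests on compact fingerprints that survive resizing and recompression, so near-duplicate memes can be grouped at scale. The average-hash sketch below (using the Pillow imaging library, with hypothetical file names) illustrates that idea; it is not the Notre Dame system.

```python
# Minimal average-hash (aHash) sketch for spotting near-duplicate images.
# Illustrative only; production content-based retrieval is far more robust.
from PIL import Image

def average_hash(path, hash_size=8):
    """Downscale to hash_size x hash_size grayscale, then threshold on the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(h1, h2):
    return bin(h1 ^ h2).count("1")

if __name__ == "__main__":
    a = average_hash("meme_original.png")      # hypothetical files
    b = average_hash("meme_recompressed.png")
    # Small distances (e.g. 5 or fewer of 64 bits) usually mean near-duplicates.
    print("Hamming distance:", hamming_distance(a, b))
```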

“Memes are easy to create and even easier to share,” said Tim Weninger, associate professor in the Department of Computer Science and Engineering at Notre Dame. “When it comes to political memes, these can be used to help get out the vote, but they can also be used to spread inaccurate information and cause harm.”

Weninger, along with Walter Scheirer, an assistant professor in the Department of Computer Science and Engineering at Notre Dame, and members of the research team collected more than two million images and content from various sources on Twitter and Instagram related to the 2019 general election in Indonesia.

The results of that election, in which the left-leaning, centrist incumbent garnered a majority vote over the conservative, populist candidate, sparked a wave of violent protests that left eight people dead and hundreds injured. Their study found both spontaneous and coordinated campaigns with the intent to influence the election and incite violence.

Those campaigns consisted of manipulated images exhibiting false claims and misrepresentation of incidents, logos belonging to legitimate news sources being used on fabricated news stories and memes created with the intent to provoke citizens and supporters of both parties.

While the ramifications of such campaigns were evident in the case of the Indonesian general election, the threat to democratic elections in the West already exists. The research team said they are developing the system to flag manipulated content to prevent violence, and to warn journalists or election monitors of potential threats in real time.

Providing users with tailored options for monitoring content

The system, which is in the research and development phase, would be scalable to provide users with tailored options for monitoring content. While many challenges remain, such as determining an optimal means of scaling up data ingestion and processing for quick turnaround, Scheirer said the system is currently being evaluated for transition to operational use.

Development is far enough along that monitoring the 2020 general election in the United States is a possibility, he said, and the team is already collecting relevant data.

“The disinformation age is here,” said Scheirer. “A deepfake replacing actors in a popular film might seem fun and lighthearted but imagine a video or a meme created for the sole purpose of pitting one world leader against another – saying words they didn’t actually say. Imagine how quickly that content could be shared and spread across platforms. Consider the consequences of those actions.”

Increasing number of false positives causing risk of alert fatigue

More than two-fifths (43%) of organizations experience false positive alerts in more than 20% of cases, while 15% reported that more than half of their security alerts are false positives. On average, respondents indicated 26% of alerts fielded by their organization are false positives, a Neustar report reveals.

alert fatigue

In response to growing cybersecurity threats, enterprises are investing significant resources in network monitoring and threat intelligence technologies that create more alerts – and more false positives – for security teams.

Security tools contributing to data overload and alert fatigue

The survey found two-fifths (39%) of organizations have seven or more tools in place that generate security alerts, and 21% reported using more than ten.

“Security tools that simply produce large quantities of data to be analyzed, without contextualizing potential threats, are contributing to data overload, alert fatigue and burnout,” said Rodney Joffe, chairman of NISC and SVP and Fellow at Neustar.

“Cybersecurity teams are increasingly drowning in data and are overwhelmed by the massive volume of alerts, many of them false positives. To ensure these high-value employees in mission critical roles are well-equipped to separate the signal from the noise, enterprises need a curated approach to security data that provides timely, actionable insights that are hyper relevant to their own organization and industry.”

Threats continuing their upward trajectory

The report indicates that threats are continuing their steady upward trajectory across vectors. The International Cyber Benchmarks Index, which reflects the overall state of the cybersecurity landscape, reached a new high of 29.8 in January 2020.

In November–December 2019, the surveyed security professionals ranked distributed denial of service attacks as their greatest concern (22%), followed by system compromise (20%) and ransomware and intellectual property theft (both 17%).

During the same period, social engineering via email was most likely to be perceived as an increasing threat to organizations (59%), followed by DDoS attacks (58%) and ransomware (56%).

Tech pros should consider modern APM tools to gain insight across the entire application stack

While application performance management (APM) has become mainstream with a majority of tech pros using APM tools regularly, there’s work to be done to move beyond troubleshooting, according to SolarWinds.

The opportunity for tech pros lies in fully leveraging the benefits of APM across the entire application stack, so they can better communicate results to the organizations they serve.

Nearly nine in 10 tech pros use APM tools in their environments, whether on-premises, hybrid, or in the cloud. However, respondents report their highest confidence area in managing and monitoring applications is troubleshooting.

This is consistent with last year’s findings, in which nearly half of respondents said troubleshooting was a top three task they managed daily. To move beyond troubleshooting, tech pros cite a need for more training and education on which APM solutions best suit their environments.

How to maximize the value of APM solutions and strategies

According to the survey, tech pros also report the need to develop skills in tracking APM impact across key business metrics to maximize the value of their APM solutions and strategies.

“The Cloud Confessions results show that while APM has finally hit mainstream, it’s largely misunderstood and therefore underutilized. This isn’t surprising considering APM has typically been siloed across DevOps and Operations teams without a holistic view of the application code, supporting infrastructure, and end-user experience,” said Jim Hansen, vice president of products, application management, SolarWinds.

“To move beyond simply reactive troubleshooting, tech pros should consider modern APM tools as the keystone to connecting these previously siloed functions to gain comprehensive insight across the entire application stack.

“When tech pros achieve this level of proactive optimization with their APM tools, they’ll feel more empowered in their roles, in collaborating across teams, and in communicating results to the business at large.”

“The findings also underscore our belief that APM tools should be simple, powerful, and affordable, enabling tech pros at any stage in their APM journey to realize the value and richness of an APM strategy,” added Hansen.

Confusion around which tools are ideal for specific IT environments

Tech pros are using APM tools, employing a nearly even mix of SaaS and on-premises to support the three architectures most often found in modern environments. Despite this, confusion around which tools are ideal for specific IT environments is consistent across application owners, developers, and support team roles.

Nearly nine in 10 tech professionals are using APM tools in their environments.

  • 59% are using APM for monolithic (traditional on-prem) app development architectures
  • 40% are using APM for N-tier service-oriented architectures
  • 39% are using APM for microservices

The top three most commonly deployed tools in support of APM strategies are:

  • Database monitoring (64%)
  • Application monitoring (63%)
  • Infrastructure monitoring (61%)

Two-fifths of tech pros face challenges due to a lack of awareness of what APM solutions are currently offered, and a similar share are confused over which currently offered APM solutions are best for their needs.

Confidence among tech pros is high

Overall, tech pros are confident in their ability to manage and monitor applications on-prem, in hybrid environments, and in the cloud; this confidence mostly sits within their ability to troubleshoot.

  • Over eight in 10 (84%) respondents are confident in their ability to successfully manage application and infrastructure performance.
  • Two-fifths (40%) of tech pros surveyed are most confident troubleshooting application issues and monitoring application availability and performance (respectively) given their existing skillset, followed by one-third (32%) of tech pros confident in collaborating with team members.
  • Troubleshooting and monitoring as the top two areas where tech pros have the most confidence is consistent with last year’s findings—in 2019, troubleshooting app issues was the number one activity tech pros spent their time on, with 48% of respondents choosing this as a top three task.

The challenges

The largest challenges tech pros face when monitoring and managing application and infrastructure performance relate to an existing knowledge and skills-gap. As a result, tech pros have continued to deal with the troubles of troubleshooting, despite nearly all using some type of APM tool in the last 12 months.

When ranking the challenges, tech pros said:

  • Lack of training for personnel was the top challenge (57%), followed by lack of awareness of what APM solutions are currently offered (44%) and confusion over which currently-offered APM solutions are best for their needs (42%).
  • All other challenges were at, or under, the 30% rate.

Nearly eight in 10 (78%) tech pros report spending less than 10% of their time proactively optimizing their environments (vs. reactively maintaining). In 2019, 77% of respondents reported spending the same amount of time on proactive optimization.

Greater skills development is needed

Tech pros value the business insights delivered from APM tools, but greater skills development is needed in establishing KPIs and communicating IT performance to the business.

The top three business insights tech pros gain from APM tools include:

  • Ability to prevent applications outages (73%)
  • Ability to prevent app slowdown related to performance and/or capacity (63%)
  • Ability to improve user/customer experience (62%)

Tech pros are collecting these business metrics, but there’s a need to bridge the gap between business metrics collected and tech pros’ confidence in their ability to communicate performance to the business.

34% of tech pros feel they need to improve their current skillset/ability to track impact across key business metrics in order to more confidently manage their organization’s IT environment, followed by 30% of tech pros who feel they need to improve their current skillset/ability to troubleshoot application issues, improve the performance of application code (29%), and manage/ensure/improve end-user performance (29%) (respectively).

The findings of this report are based on a survey fielded in November 2019, which yielded responses from 317 application owners, developers, and support team professionals (practitioner, manager, and director roles) in the U.S. and Canada from public- and private-sector small, mid-size, and enterprise organizations. Respondents include 101 application owners, 108 developers, and 108 support team technology professionals.

CIOs using AI to bridge gap between IT resources and cloud complexity

There’s a widening gap between IT resources and the demands of managing the increasing scale and complexity of enterprise cloud ecosystems, a Dynatrace survey of 800 CIOs reveals.

IT leaders around the world are concerned about their ability to support the business effectively, as traditional monitoring solutions and custom-built approaches drown their teams in data and alerts that offer more questions than answers.

CIO responses in the research indicate that, on average, IT and cloud operations teams receive nearly 3,000 alerts from their monitoring and management tools each day. With such a high volume of alerts, the average IT team spends 15% of its total available time trying to identify which alerts need to be focused on and which are irrelevant.

This costs organizations an average of $1.5 million in overhead expense each year. As a result, CIOs are increasingly looking to AI and automation as they seek to maintain control and close the gap between constrained IT resources and the rising scale and complexity of the enterprise cloud.
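
The overhead figure is straightforward arithmetic once a team size and cost basis are assumed. The sketch below uses illustrative numbers (they are not from the Dynatrace survey) to show how a 15% time drain lands in the reported $1.5 million range.

```python
# Back-of-the-envelope reproduction of the ~$1.5M overhead figure.
# Team size and cost per engineer are assumptions, not survey data.
team_size = 50               # assumed IT/cloud operations headcount
cost_per_engineer = 200_000  # assumed fully loaded annual cost (USD)
time_on_triage = 0.15        # 15% of available time, per the survey

annual_overhead = team_size * cost_per_engineer * time_on_triage
print(f"Estimated annual triage overhead: ${annual_overhead:,.0f}")  # $1,500,000
```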

Enterprise cloud gap: IT is drowning in data

Traditional monitoring tools were not designed to handle the volume, velocity and variety of data generated by applications running in dynamic, web-scale enterprise clouds. These tools are often siloed and lack the broader context of events taking place across the entire technology stack.

As a result, they bombard IT and cloud operations teams with hundreds, if not thousands, of alerts every day. IT is drowning in data as incremental improvements to monitoring tools fail to make a difference.

  • On average, IT and cloud operations teams receive 2,973 alerts from their monitoring and management tools each day, a 19% increase in the last 12 months.
  • 70% of CIOs say their organization is struggling to cope with the number of alerts from monitoring and management tools.
  • 75% of organizations say most of the alerts from monitoring and management tools are irrelevant.
  • On average, just 26% of the alerts organizations receive each day require actioning.

Existing systems provide more questions than answers

Traditional monitoring tools only provide data on a narrow selection of components from the technology stack. This forces IT teams to manually integrate and correlate alerts to filter out duplicates and false positives before manually identifying the underlying root cause of issues.

As a result, IT teams’ ability to support the business and customers is greatly reduced as they’re faced with more questions than answers.

  • On average, IT teams spend 15% of their time trying to identify which alerts they need to focus on, and which are irrelevant.
  • The time IT teams spend trying to identify which alerts need to be focused on and which are irrelevant costs organizations, on average, $1,530,000 each year.
  • The excessive volume of alerts causes 70% of IT teams to experience problems that should have been prevented.
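
The manual correlation work described above amounts to grouping alerts that describe the same underlying event. The toy sketch below shows that de-duplication step in its simplest form; the field names and time window are assumptions, and AIOps platforms do this with topology and causal context rather than simple keys.

```python
# Toy alert de-duplication: collapse alerts that share a fingerprint of
# host, service and symptom within a time window. Field names are assumed.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)

def fingerprint(alert):
    return (alert["host"], alert["service"], alert["symptom"])

def deduplicate(alerts):
    """Keep one representative alert per fingerprint per time window."""
    seen = defaultdict(list)  # fingerprint -> timestamps already kept
    kept = []
    for alert in sorted(alerts, key=lambda a: a["time"]):
        key = fingerprint(alert)
        if not any(alert["time"] - t < WINDOW for t in seen[key]):
            seen[key].append(alert["time"])
            kept.append(alert)
    return kept

if __name__ == "__main__":
    now = datetime.now()
    raw = [
        {"time": now, "host": "web-01", "service": "checkout", "symptom": "latency"},
        {"time": now + timedelta(minutes=3), "host": "web-01",
         "service": "checkout", "symptom": "latency"},  # duplicate of the first
        {"time": now + timedelta(minutes=5), "host": "db-01",
         "service": "orders", "symptom": "cpu"},
    ]
    print(f"{len(raw)} raw alerts -> {len(deduplicate(raw))} after de-duplication")
```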

Precise, explainable AI provides relief

Organizations need a radically different approach to keep up with the transformation that has taken place in their IT environments: an answers-based approach to monitoring, with AI and automation at the core.