IT leaders on 2021 opportunities, challenges and key technology trends

IEEE released the results of a survey of CIOs and CTOs in the U.S., U.K., China, India and Brazil regarding the most important technologies for 2021 overall, the impact of the COVID-19 pandemic on the speed of their technology adoption and the industries expected to be most impacted by technology in the year ahead.

2021 most important technologies and challenges

Which will be the most important technologies in 2021? Among total respondents, 32% say AI and machine learning, followed by 5G (20%) and IoT (14%).

Manufacturing (19%), healthcare (18%), financial services (15%) and education (13%) are the industries that respondents believe will be most impacted by technology in 2021, according to the CIOs and CTOs surveyed.

At the same time, 52% of CIOs and CTOs see their biggest challenge in 2021 as dealing with aspects of COVID-19 recovery in relation to business operations. These challenges include a permanent hybrid remote and office work structure (22%), office and facilities reopenings and return (17%), and managing permanent remote working (13%).

However, 11% said the agility to stop and start IT initiatives as this unpredictable environment continues will be their biggest challenge. Another 11% cited online security threats, including those related to remote workers, as the biggest challenge they see in 2021.

Technology adoption, acceleration and disaster preparedness due to COVID-19

CIOs and CTOs surveyed have sped up adopting some technologies due to the pandemic:

  • 55% of respondents have accelerated adoption of cloud computing
  • 52% have accelerated 5G adoption
  • 51% have accelerated AI and machine learning

The adoption of IoT (42%), augmented and virtual reality (35%) and video conferencing (35%) technologies has also been accelerated due to the global pandemic.

Compared to a year ago, 92% of CIOs and CTOs believe their company is better prepared to respond to a potentially catastrophic interruption such as a data breach or natural disaster. What’s more, of those who say they are better prepared, 58% strongly agree that COVID-19 accelerated their preparedness.

When asked which technologies will have the greatest impact on global COVID-19 recovery, 25% of those surveyed said AI and machine learning.

Cybersecurity

The top two concerns for CIOs and CTOs when it comes to the cybersecurity of their organization are security issues related to the mobile workforce including employees bringing their own devices to work (37%) and ensuring the IoT is secure (35%). This is not surprising, since the number of connected devices such as smartphones, tablets, sensors, robots and drones is increasing dramatically.

34% of CIO and CTO respondents said they can track and manage 26-50% of devices connected to their business, while 20% of those surveyed said they could track and manage 51-75% of connected devices.

Attacks are rising in all vectors and types

DDoS, web application, bot, and other attacks have surged exponentially compared to the first half of 2019, according to CDNetworks.

In particular, attacks on web applications rose by 800%. These alarming statistics show that enterprises are experiencing challenging times in their attempts to defend against cyber attacks and protect their online assets.

Hackers extremely sensitive to industry transformation

The report goes on to say that hackers are extremely sensitive to industry transformation. For this reason, the challenges of the global pandemic are leading hackers to shift their attacks away from less-visited sites, such as those of hospitality, transportation, and other travel-related businesses, and toward sites that are profiting under COVID-19, such as media, public services, and education.

E-government and digital public service systems are also magnets to hackers due to the sensitive and valuable information these systems hold. Researchers contend that attacks against public sectors will continue with increasing virulence.

All types of attacks continued to increase. Consider that:

  • DDoS attack incidents saw a 147.63% year-on-year growth.
  • On average, 660 bot attack incidents were blocked every second, nearly double last year's figure.
  • Over 4.2 billion web application attacks were blocked in H1, a figure eight times higher than in the same period of 2019.

It is also worth noting that web application attacks in the public sector surpassed attacks in retail venues, making the public sector the single most attacked industry during this period. In fact, over 1 billion of the web attacks were targeted toward the public sector, which accounts for 26% of total attacks.

Equally disturbing is the fact that with AI becoming a vital part of cybersecurity, hackers are now using machine learning to detect and crack vulnerabilities in networks and systems.

Attacks rising in all vectors and types

The report makes it clear that attacks are rising in all vectors and types year over year. As new web application methodologies, from network security to cloud security, expose new attack surfaces, the boundary of security protection continues to expand with them. As a result, today’s APIs, micro-services, and serverless functions are all vulnerable to malformed requests, bot traffic, and DDoS attacks at both network and application layers.

Moreover, the evolution of 5G networks, edge computing, AI, and Internet of Things is rapidly forcing conventional security into the dustbin. In its place, software-defined security is emerging as a significant trend in the development of network security.

Enterprises that have an online presence and care about compliance, user privacy, security, and online availability can no longer enjoy the luxury of cherry-picking their security services because conventional security devices and strategies are becoming inadequate for handling today’s challenges. Rather, they must act immediately to adopt a comprehensive website security suite that includes a web application firewall, bot management solution, and DDoS protection.
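To make the layered-defense idea concrete, here is a minimal sketch of one such layer: a per-client token-bucket rate limiter of the kind a bot management or DDoS mitigation tier applies before traffic reaches the application. The class, rates, and responses below are illustrative assumptions, not a description of any particular vendor's product.

```python
import time

class TokenBucket:
    """Allow a steady request rate per client, with a small burst allowance."""
    def __init__(self, rate_per_sec, burst):
        self.rate, self.burst = rate_per_sec, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens according to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True          # forward the request
        return False             # throttle: likely flood or bot traffic

buckets = {}

def handle_request(client_ip):
    bucket = buckets.setdefault(client_ip, TokenBucket(rate_per_sec=5, burst=10))
    return "200 OK" if bucket.allow() else "429 Too Many Requests"
```

In practice such a limiter sits alongside, not instead of, a web application firewall and application-layer bot detection.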

Intelligent confrontation will be the new battlefield for cloud security in the near future. To minimize the exposure window, the time has come to fundamentally rethink strategy and embrace a layered defense to gain a tactical edge and achieve superiority on the battlefield in both conventional conflicts and asymmetric cyber warfare.

The AI in cybersecurity market to generate $101.8 billion in 2030

The AI in cybersecurity market is projected to generate a revenue of $101.8 billion in 2030, increasing from $8.6 billion in 2019, progressing at a 25.7% CAGR during 2020-2030, ResearchAndMarkets reveals.

On the basis of application, the market is categorized into threat intelligence, fraud detection/anti-fraud, security and vulnerability management, data loss prevention (DLP), identity and access management, intrusion detection/prevention system, antivirus/antimalware, unified threat management, and risk & compliance management. The DLP category is expected to advance at the fastest pace during the forecast period.

Malicious attacks and cyber frauds growing rapidly

The number of malicious attacks and cyber frauds has risen considerably across the globe, which can be attributed to surging internet penetration and the increasing utilization of cloud solutions.

Cyber fraud, including payment and identity card theft, accounts for more than 55% of all cybercrime and leads to major losses for organizations if not mitigated. Owing to this, businesses these days are adopting advanced solutions for dealing with cybercrime in an efficient way.

This is further resulting in the growth of the global AI in cybersecurity market. AI-based solutions are capable of combating cyber fraud by reducing response time, identifying threats, and refining techniques for distinguishing attacks that need immediate attention.

The number of cyber-attacks has also been growing because of the surging adoption of the BYOD policy all over the world. It has been observed that the policy aids in increasing productivity and further enhances employee satisfaction.

That being said, it also makes important company information and data vulnerable to cyber-attacks. Employees' devices have wide-ranging capabilities, and IT departments are often unable to fully qualify, evaluate, and approve each and every device, which can pose a high security threat to confidential data.

DLP systems utilized for enforcing data security policies

AI provides advanced protection via machine learning technology, and hence offers complete endpoint security. The utilization of AI can efficiently aid in mitigating security threats and preventing attacks.

DLP plays a significant role in monitoring, identifying, and protecting the data in storage and in motion over the network. Certain specific data security policies are formulated in each organization and it is mandatory for the IT personnel to strictly follow them.

DLP systems are majorly utilized for enforcing data security policies in order to prevent unauthorized usage or access to confidential data. The fraud detection/anti-fraud category accounted for the major share of the market in 2019 and is predicted to dominate the market during the forecast period as well.
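As a rough, hypothetical illustration of the policy-enforcement role described above, the sketch below scans outbound text for patterns a policy forbids (payment card and US Social Security number formats) and reports any violations. Real DLP products also classify data at rest, inspect data in motion across the network, and weigh context, none of which is modeled here.

```python
import re

# Illustrative policy rules only; a production DLP system derives these from
# organization-specific data security policies.
POLICY_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def enforce_policy(outbound_text):
    """Return the list of violated rules; an empty list means the data may leave."""
    return [name for name, pattern in POLICY_PATTERNS.items()
            if pattern.search(outbound_text)]

print(enforce_policy("Invoice total due"))                  # [] -> allowed
print(enforce_policy("Card number: 4111 1111 1111 1111"))   # ['payment_card'] -> blocked
```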

The AI in cybersecurity market by region

Geographically, the AI in cybersecurity market was led by North America in 2019, according to the publisher’s report. A large number of companies are deploying cybersecurity solutions in the region, owing to the surging number of cyber-attacks.

Moreover, the presence of established players and a high digitization rate are also driving the growth of the regional market. The Asia-Pacific region is expected to progress at the fastest pace during the forecast period.

In conclusion, the market is growing due to increasing cybercrime across the globe and rising adoption of the BYOD policy.

Researchers bring deep learning to IoT devices

Deep learning is everywhere. This branch of artificial intelligence curates your social media and serves your Google search results. Soon, deep learning could also check your vitals or set your thermostat.

MIT researchers have developed a system that could bring deep learning neural networks to new – and much smaller – places, like the tiny computer chips in wearable medical devices, household appliances, and the 250 billion other objects that constitute the IoT.

The system, called MCUNet, designs compact neural networks that deliver unprecedented speed and accuracy for deep learning on IoT devices, despite limited memory and processing power. The technology could facilitate the expansion of the IoT universe while saving energy and improving data security.

The lead author is Ji Lin, a PhD student in Song Han‘s lab in MIT’s Department of Electrical Engineering and Computer Science (EECS).

The Internet of Things

The IoT was born in the early 1980s. Grad students at Carnegie Mellon University, including Mike Kazar ’78, connected a Coca-Cola machine to the internet. The group’s motivation was simple: laziness.

They wanted to use their computers to confirm the machine was stocked before trekking from their office to make a purchase. It was the world’s first internet-connected appliance. “This was pretty much treated as the punchline of a joke,” says Kazar, now a Microsoft engineer. “No one expected billions of devices on the internet.”

Since that Coke machine, everyday objects have become increasingly networked into the growing IoT. That includes everything from wearable heart monitors to smart fridges that tell you when you’re low on milk.

IoT devices often run on microcontrollers – simple computer chips with no operating system, minimal processing power, and less than one thousandth of the memory of a typical smartphone. So pattern-recognition tasks like deep learning are difficult to run locally on IoT devices. For complex analysis, IoT-collected data is often sent to the cloud, making it vulnerable to hacking.

“How do we deploy neural nets directly on these tiny devices? It’s a new research area that’s getting very hot,” says Han. “Companies like Google and ARM are all working in this direction.” Han is too.

With MCUNet, Han’s group codesigned two components needed for “tiny deep learning” – the operation of neural networks on microcontrollers. One component is TinyEngine, an inference engine that directs resource management, akin to an operating system. TinyEngine is optimized to run a particular neural network structure, which is selected by MCUNet’s other component: TinyNAS, a neural architecture search algorithm.

System-algorithm codesign

Designing a deep network for microcontrollers isn’t easy. Existing neural architecture search techniques start with a big pool of possible network structures based on a predefined template, then they gradually find the one with high accuracy and low cost. While the method works, it’s not the most efficient.

“It can work pretty well for GPUs or smartphones,” says Lin. “But it’s been difficult to directly apply these techniques to tiny microcontrollers, because they are too small.”

So Lin developed TinyNAS, a neural architecture search method that creates custom-sized networks. “We have a lot of microcontrollers that come with different power capacities and different memory sizes,” says Lin. “So we developed the algorithm [TinyNAS] to optimize the search space for different microcontrollers.”
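A minimal sketch of the idea Lin describes, under simplifying assumptions that are ours rather than TinyNAS's: estimate each candidate architecture's flash and SRAM footprint, discard anything that does not fit the target microcontroller, and only then compare the survivors on accuracy.

```python
def estimate_footprint(arch):
    """Rough proxies: total weights ~ flash use, widest layer ~ peak SRAM.

    arch is a list of (in_channels, out_channels, kernel_size) layer specs;
    feature maps are assumed to be 32x32 for simplicity.
    """
    params = sum(cin * cout * k * k for (cin, cout, k) in arch)
    peak_activation = max(cout for (_cin, cout, _k) in arch) * 32 * 32
    return params, peak_activation

def search_for_device(candidates, flash_budget, sram_budget, evaluate_accuracy):
    """Keep only architectures that fit the device, then pick the most accurate."""
    feasible = [a for a in candidates
                if estimate_footprint(a)[0] <= flash_budget
                and estimate_footprint(a)[1] <= sram_budget]
    return max(feasible, key=evaluate_accuracy) if feasible else None

# Hypothetical usage with three candidate networks and a stand-in accuracy score.
candidates = [
    [(3, 8, 3), (8, 16, 3)],
    [(3, 16, 3), (16, 32, 3)],
    [(3, 32, 3), (32, 64, 3)],
]
best = search_for_device(candidates, flash_budget=20_000, sram_budget=40_000,
                         evaluate_accuracy=lambda a: sum(c for (_, c, _) in a))
print(best)  # the widest network that still fits both budgets
```

The actual TinyNAS pipeline searches a far larger space and measures behavior on real devices, but this captures the constraint-first flavor described in the article.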

The customized nature of TinyNAS means it can generate compact neural networks with the best possible performance for a given microcontroller – with no unnecessary parameters. “Then we deliver the final, efficient model to the microcontroller,” says Lin.

To run that tiny neural network, a microcontroller also needs a lean inference engine. A typical inference engine carries some dead weight – instructions for tasks it may rarely run. The extra code poses no problem for a laptop or smartphone, but it could easily overwhelm a microcontroller.

“It doesn’t have off-chip memory, and it doesn’t have a disk,” says Han. “Everything put together is just one megabyte of flash, so we have to really carefully manage such a small resource.” Cue TinyEngine.

The researchers developed their inference engine in conjunction with TinyNAS. TinyEngine generates the essential code necessary to run TinyNAS’ customized neural network. Any deadweight code is discarded, which cuts down on compile-time.

“We keep only what we need,” says Han. “And since we designed the neural network, we know exactly what we need. That’s the advantage of system-algorithm codesign.”

In the group’s tests of TinyEngine, the size of the compiled binary code was between 1.9 and five times smaller than comparable microcontroller inference engines from Google and ARM.

TinyEngine also contains innovations that reduce runtime, including in-place depth-wise convolution, which cuts peak memory usage nearly in half. After codesigning TinyNAS and TinyEngine, Han’s team put MCUNet to the test.
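To illustrate the memory argument behind in-place depth-wise convolution (a simplified sketch of the general idea, not the MCUNet kernel): because each channel is convolved independently, only a one-channel scratch buffer is needed instead of a whole output tensor, which is what keeps peak activation memory low.

```python
import numpy as np

def depthwise_conv_inplace(x, kernels):
    """3x3 depth-wise convolution with 'same' padding, overwriting the input.

    x: (C, H, W) activation tensor, updated in place.
    kernels: (C, 3, 3), one filter per channel.
    """
    C, H, W = x.shape
    scratch = np.empty((H, W), dtype=x.dtype)        # one-channel temp buffer
    padded = np.zeros((H + 2, W + 2), dtype=x.dtype)
    for c in range(C):
        padded[1:-1, 1:-1] = x[c]
        for i in range(H):
            for j in range(W):
                scratch[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernels[c])
        x[c] = scratch                               # write the result back in place
    return x

x = np.random.rand(4, 8, 8).astype(np.float32)
k = np.random.rand(4, 3, 3).astype(np.float32)
depthwise_conv_inplace(x, k)   # x now holds the convolved activations
```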

MCUNet’s first challenge was image classification. The researchers used the ImageNet database to train the system with labeled images, then to test its ability to classify novel ones. On a commercial microcontroller they tested, MCUNet successfully classified 70.7 percent of the novel images — the previous state-of-the-art neural network and inference engine combo was just 54 percent accurate. “Even a 1 percent improvement is considered significant,” says Lin. “So this is a giant leap for microcontroller settings.”

The team found similar results in ImageNet tests of three other microcontrollers. And on both speed and accuracy, MCUNet beat the competition for audio and visual “wake-word” tasks, where a user initiates an interaction with a computer using vocal cues (think: “Hey, Siri”) or simply by entering a room. The experiments highlight MCUNet’s adaptability to numerous applications.

Huge potential

The promising test results give Han hope that it will become the new industry standard for microcontrollers. “It has huge potential,” he says.

The advance “extends the frontier of deep neural network design even farther into the computational domain of small energy-efficient microcontrollers,” says Kurt Keutzer, a computer scientist at the University of California at Berkeley, who was not involved in the work. He adds that MCUNet could “bring intelligent computer-vision capabilities to even the simplest kitchen appliances, or enable more intelligent motion sensors.”

MCUNet could also make IoT devices more secure. “A key advantage is preserving privacy,” says Han. “You don’t need to transmit the data to the cloud.”

Analyzing data locally reduces the risk of personal information being stolen — including personal health data. Han envisions smart watches with MCUNet that don’t just sense users’ heartbeat, blood pressure, and oxygen levels, but also analyze and help them understand that information.

MCUNet could also bring deep learning to IoT devices in vehicles and rural areas with limited internet access.

Plus, MCUNet’s slim computing footprint translates into a slim carbon footprint. “Our big dream is for green AI,” says Han, adding that training a large neural network can burn carbon equivalent to the lifetime emissions of five cars. MCUNet on a microcontroller would require a small fraction of that energy.

“Our end goal is to enable efficient, tiny AI with less computational resources, less human resources, and less data,” says Han.

Organizations plan to use AI and ML to tackle unknown attacks faster

Wipro published a report which provides fresh insights on how AI will be leveraged as part of defender stratagems as more organizations lock horns with sophisticated cyberattacks and become more resilient.

Organizations need to tackle unknown attacks

There has been an increase in R&D, with 49% of worldwide cybersecurity-related patents filed in the last four years focused on AI and ML applications. Nearly half of the organizations are expanding cognitive detection capabilities to tackle unknown attacks in their Security Operations Center (SOC).

The report also illustrates a paradigm shift towards cyber resilience amid the rise in global remote work. It considers the impact of the COVID-19 pandemic on the cybersecurity landscape around the globe and provides a path for organizations to adapt to this new normal.

The report draws on participation from 194 organizations worldwide and 21 partner academic, institutional and technology organizations over four months of research.

Global macro trends in cybersecurity

  • Nation-state attacks target the private sector: 86% of all nation-state attacks fall under the espionage category, and 46% of them are targeted towards private companies.
  • Evolving threat patterns have emerged in the consumer and retail sectors: 47% of suspicious social media profiles and domains were detected active in 2019 in these sectors.

Cyber trends sparked by the global pandemic

  • Cyber hygiene proven difficult during remote work enablement: 70% of the organizations faced challenges in maintaining endpoint cyber hygiene and 57% in mitigating VPN and VDI risks.
  • Emerging post-COVID cybersecurity priorities: 87% of the surveyed organizations are keen on implementing zero trust architecture and 87% are planning to scale up secure cloud migration.

Micro trends: An inside-out enterprise view

  • Low confidence in cyber resilience: 59% of the organizations understand their cyber risks but only 23% of them are highly confident about preventing cyberattacks.
  • Strong cybersecurity spend due to board oversight & regulations: 14% of organizations have a security budget of more than 12% of their overall IT budgets.

Micro trends: Best cyber practices to emulate

  • Laying the foundation for a cognitive SOC: 49% of organizations are adding cognitive detection capabilities to their SOC to tackle unknown attacks.
  • Concerns about OT infrastructure attacks increasing: 65% of organizations are performing log monitoring of Operational Technology (OT) and IoT devices as a control to mitigate increased OT risks.

Meso trends: An overview on collaboration

  • Fighting cyber-attacks demands stronger collaboration: 57% of organizations are willing to share only IoCs and 64% consider reputational risks to be a barrier to information sharing.
  • Cyber-attack simulation exercises serve as a strong wakeup call: 60% participate in cyber simulation exercises coordinated by industry regulators, CERTs and third-party service providers and 79% of organizations have a dedicated cyber insurance policy in place.

Future of cybersecurity

  • 5G security is the emerging area for patent filing: 7% of the worldwide patents filed in the cyber domain in the last four years have been related to 5G security.

Vertical insights by industry

  • Banking, financial services & insurance: 70% of financial services enterprises said that new regulations are fuelling an increase in security budgets, with 54% attributing higher budgets to board intervention.
  • Communications: 71% of organizations consider cloud-hosting risk as a top risk.
  • Consumer: 86% of consumer businesses said email phishing is a top risk and 75% of enterprises said a bad cyber event will lead to damaged brand reputation in the marketplace.
  • Healthcare & life sciences: 83% of healthcare organizations have highlighted maintaining endpoint cyber hygiene as a challenge, and 71% have highlighted that breaches reported by peers have led to increased security budget allocation.
  • Energy, natural resources and utilities: 71% of organizations reported that OT/IT integration would bring new risks.
  • Manufacturing: 58% said that they are not confident about preventing risks from supply chain providers.

Bhanumurthy B.M, President and Chief Operating Officer, Wipro said, “There is a significant shift in global trends like rapid innovation to mitigate evolving threats, strict data privacy regulations and rising concern about breaches.

“Security is ever changing and the report brings more focus, enablement, and accountability on executive management to stay updated. Our research not only focuses on what happened during the pandemic but also provides foresight toward future cyber strategies in a post-COVID world.”

Organizations need to understand risks and ethics related to AI

Despite highly publicized risks of data-sharing and AI, from facial recognition to political deepfakes, leadership at many organizations seems to be vastly underestimating the ethical challenges of the technology, NTT DATA Services reveals.

Just 12% of executives and 15% of employees say they believe AI will collect consumer data in unethical ways, and only 13% of executives and 19% of employees say AI will discriminate against minority groups.

Surveying 1,000 executive-level and non-executive employees across industries in North America in early 2020, the results indicate that organizations are eager to increase the pace of transformation.

AI and automation technologies play a vital role, helping businesses improve decision-making, business processes and even workplace culture. In fact, 61% say that AI will speed up innovation, and respondents say the technology is beginning to support improvements to efficiency (83%) and productivity (79%). Yet, there are many challenges with adoption and implementation, with ethical considerations and data security among the top few.

“AI presents one of the great leadership opportunities and challenges of our time. Leaders must be diligent in striking the balance, but they don’t have to go it alone,” said Eric Clark, Chief Digital Officer, NTT DATA Services.

“Our study outlines how businesses can take full advantage of emerging technologies and accelerate transformation, while taking necessary precautions on the path to responsible and secure adoption of artificial intelligence.”

Ethics and effectiveness of AI

For AI to be effective and avoid ethical pitfalls, businesses need to ensure that AI isn’t being programmed with biases that could lead to ethically charged decision-making or that cause AI to malfunction in some way.

One-quarter of executives and 36% of employees say they have experienced AI ignoring a command, and about one-fifth of both groups say AI offered them suggestions that reflected bias against a marginalized group.

Organizations do not have money or time to waste on technology investments gone wrong—so they must pivot their organizations to focus on agility, talent, change management, ethics, and other pressing issues.

Automation’s impact on the modern workforce

Modernizing the workforce means giving all employees access to the data and technologies that help them achieve optimum productivity. Most executives and employees believe that AI and automation will help improve employee effectiveness.

71% of executives say AI will make employees more efficient, 69% say it will improve employee accuracy, and 61% say it will speed up innovation. For this to happen, leaders need to invest in reskilling their workforce to get the most value out of emerging technologies.

Empowering the workforce through technology not only helps improve the bottom line, it helps drive employee retention – with 45% of employees responding they would be motivated to stay by education opportunities.

“The study overall paints a realistic picture of what we are seeing in the market,” said Tom Reuner, Senior Vice President at HFS Research.

“Going forward, enterprises will have to manage talent, organization, culture and provide the right environment for the new workforce, which seeks interesting projects and looks for meaning and motivation. AI technologies and methodologies are a critical enabler on that journey.”

AI adoption to create culture of speed, reinvention

Businesses and entire markets are being remade in terms of opportunity, operations and customer expectations, and there is no going back to the old pace of innovation. In fact, 47% of those surveyed believe failing to implement AI in some way will cause them to lose customers to competitors, and 44% think the bottom line will suffer.

However, few employees at companies surveyed think the pace of change at their organization is fast enough. In fact, less than one-third of executives and employees describe the pace of technology change, process change, or executive decision-making at their company as fast.

Even fewer—just 18% of employees and 19% of executives—say culture, which plays a major role in determining how workers respond to adjustments in technology and processes, changes quickly. This creates an opportunity for AI to drive sweeping change and speed up the pace of innovation and technology adoption.

SecOps teams turn to next-gen automation tools to address security gaps

SOCs across the globe are most concerned with advanced threat detection and are increasingly looking to next-gen automation tools like AI and ML technologies to proactively safeguard the enterprise, Micro Focus reveals.

Growing deployment of next-gen tools and capabilities

The report’s findings show that over 93 percent of respondents employ AI and ML technologies with the leading goal of improving advanced threat detection capabilities, and that over 92 percent of respondents expect to use or acquire some form of automation tool within the next 12 months.

These findings indicate that as SOCs continue to mature, they will deploy next-gen tools and capabilities at an unprecedented rate to address gaps in security.

“The odds are stacked against today’s SOCs: more data, more sophisticated attacks, and larger surface areas to monitor. However, when properly implemented, AI technologies, such as unsupervised machine learning, are helping to fuel next-generation security operations, as evidenced by this year’s report,” said Stephan Jou, CTO Interset at Micro Focus.

“We’re observing more and more enterprises discovering that AI and ML can be remarkably effective and augment advanced threat detection and response capabilities, thereby accelerating the ability of SecOps teams to better protect the enterprise.”

Organizations relying on the MITRE ATT&CK framework

As the volume of threats rises, the report finds that 90 percent of organizations are relying on the MITRE ATT&CK framework as a tool for understanding attack techniques, and that the most common reason for relying on the knowledge base of adversary tactics is detecting advanced threats.

Further, the scale of technology needed to secure today’s digital assets means SOC teams are relying more heavily on tools to effectively do their jobs.

With so many responsibilities, the report found that SecOps teams are using numerous tools to help secure critical information, with organizations widely using 11 common types of security operations tools and with each tool expected to exceed 80% adoption in 2021.

Key observations

  • COVID-19: During the pandemic, security operations teams have faced many challenges. The biggest has been the increased volume of cyberthreats and security incidents (45 percent globally), followed by higher risks due to workforce usage of unmanaged devices (40 percent globally).
  • Most severe SOC challenges: Approximately 1 in 3 respondents cite the two most severe challenges for the SOC team as prioritizing security incidents and monitoring security across a growing attack surface.
  • Cloud journeys: Over 96 percent of organizations use the cloud for IT security operations, and on average nearly two-thirds of their IT security operations software and services are already deployed in the cloud.

Most cybersecurity pros believe automation will make their jobs easier

Despite 88% of cybersecurity professionals believing automation will make their jobs easier, younger staffers are more concerned that the technology will replace their roles than their veteran counterparts, according to research by Exabeam.

Overall, satisfaction levels continued a 3-year positive trend, with 96% of respondents indicating they are happy with their role and responsibilities and 87% reportedly pleased with salary and earnings. Additionally, there was improvement in gender diversity, with female respondents increasing from 9% in 2019 to 21% this year.

“The concern for automation among younger professionals in cybersecurity was surprising to us. In trying to understand this sentiment, we could partially attribute it to lack of on-the-job training using automation technology,” said Samantha Humphries, security strategist at Exabeam.

“As we noted earlier this year in our State of the SOC research, ambiguity around career path or lack of understanding about automation can have an impact on job security. It’s also possible that this is a symptom of the current economic climate or a general lack of experience navigating the workforce during a global recession.”

AI and ML: A threat to job security?

Of respondents under the age of 45, 53% agreed or strongly agreed that AI and ML are a threat to their job security. This is contrasted with just 25% of respondents 45 and over who feel the same, possibly indicating that subsets of security professionals in particular prefer to write rules and manually investigate.

Interestingly, when asked directly about automation software, 89% of respondents under 45 years old believed it would improve their jobs, yet 47% are still threatened by its use. This is again in contrast with the 45 and over demographic, where 80% believed automation would simplify their work, and only 22% felt threatened by its use.

Examining sentiment around automation by region, 47% of US respondents were concerned about job security when automation software is in use, as were respondents in Singapore (54%), Germany (42%), Australia (40%) and the UK (33%).

In the survey, which drew insights from professionals throughout the US, the UK, Australia, Canada, India and the Netherlands, only 10% overall believed that AI and automation were a threat to their jobs.

On the flip side, there were noticeable increases in job approval across the board, with an upward trend in satisfaction around role and responsibilities (96%), salary (87%) and work/life balance (77%).

Diversity showing positive signs of improvement

When asked what else they enjoyed about their jobs, respondents listed working in an environment with professional growth (15%) as well as opportunities to challenge oneself (21%) as top motivators.

53% reported jobs that are either stressful or very stressful, which is down from last year (62%). Interestingly, despite being among those that are generally threatened by automation software, 100% of respondents aged 18-24 reported feeling secure in their roles and were happiest with their salaries (93%).

Though the number of female respondents increased this year, it remains to be seen whether this will emerge as a trend. This year’s male respondents (78%) are down 13 percentage points from last year (91%).

In 2019, nearly 41% had been in the profession for at least 10 years. This year, a larger percentage (83%) have 10 years or less of experience, and 34% have been in the cybersecurity industry for five years or less. Additionally, one-third do not have formal cybersecurity degrees.

“There is evidence that automation and AI/ML are being embraced, but this year’s survey exposed fascinating generational differences when it comes to professional openness and using all available tools to do their jobs,” said Phil Routley, senior product marketing manager, APJ, Exabeam.

“And while gender diversity is showing positive signs of improvement, it’s clear we still have a very long way to go in breaking down barriers for female professionals in the security industry.”

Is the skills gap preventing you from executing your enterprise strategy?

As many business leaders look to close the skills gap and cultivate a sustainable workforce amid COVID-19, an IBM Institute for Business Value (IBV) study reveals less than 4 in 10 human resources (HR) executives surveyed report they have the skills needed to achieve their enterprise strategy.

COVID-19 exacerbated the skills gap in the enterprise

Pre-pandemic research in 2018 found that as many as 120 million workers in the world’s 12 largest economies may need to be retrained or reskilled in the next three years because of AI and automation.

That challenge has only been exacerbated in the midst of the COVID-19 pandemic – as many C-suite leaders accelerate digital transformation, they report inadequate skills is one of their biggest hurdles to progress.

Employers should shift to meet new employee expectations

Ongoing consumer research also shows that surveyed employees’ expectations of their employers have significantly changed during the COVID-19 pandemic, but there’s a disconnect in how effective leaders and employees believe companies have been in addressing these gaps.

74% of executives surveyed believe their employers have been helping them learn the skills needed to work in a new way, compared to just 38% of employees surveyed. Likewise, 80% of executives surveyed said their company is supporting employees’ physical and emotional health, but only 46% of employees surveyed agreed.

“Today perhaps more than ever, organizations can either fail or thrive based on their ability to enable the agility and resiliency of their greatest competitive advantage – their people,” said Amy Wright, managing partner, IBM Talent & Transformation.

“Business leaders should shift to meet new employee expectations brought on by the COVID-19 pandemic, such as holistic support for their well-being, development of new skills and a truly personalized employee experience, even while working remotely.

“It’s imperative to bring forward a new era of HR – and those companies that were already on the path are better positioned to succeed amid disruption today and in the future.”

The study includes insights from more than 1,500 global HR executives surveyed in 20 countries and 15 industries. Based on those insights, the study provides a roadmap for the journey to the next era of HR, with practical examples of how HR leaders at surveyed “high-performing companies” – meaning those that outpace all others in profitability, revenue growth and innovation – can reinvent their function to build a more sustainable workforce.

Additional highlights

  • Nearly six in 10 high performing companies surveyed report using AI and analytics to make better decisions about their talent, such as skilling programs and compensation decisions. 41% are leveraging AI to identify skills they’ll need for the future, versus 8% of responding peers.
  • 65% of surveyed high performing companies are looking to AI to identify behavioral skills like growth mindset and creativity for building diverse adaptable teams, compared to 16% of peers.
  • More than two thirds of all respondents said agile practices are essential to the future of HR. However, less than half of HR units in participating organizations have capabilities in design thinking and agile practices.
  • 71% of high performing companies surveyed report they are widely deploying a consistent HR technology architecture, compared to only 11% of others.

“In order to gain long-term business alignment between leaders and employees, this moment requires HR to operate as a strategic advisor – a new role for many HR organizations,” said Josh Bersin, global independent analyst and dean of the Josh Bersin Academy.

“Many HR departments are looking to technology, such as the cloud and analytics, to support a more cohesive and self-service approach to traditional HR responsibilities. Offering employee empowerment through holistic support can drive larger strategic change to the greater business.”

Three core elements to promote lasting change

According to the report, surveyed HR executives from high-performing companies were eight times as likely as their surveyed peers to be driving disruption in their organizations. Among those companies, the following actions are a clear priority:

  • Accelerating the pace of continuous learning and feedback
  • Cultivating empathetic leadership to support employees’ holistic well-being
  • Reinventing their HR function and technology architecture to make more real-time data-driven decisions

In the era of AI, standards are falling behind

According to a recent study, only a minority of software developers are actually working in a software development company. This means that nowadays literally every company builds software in some form or another.

As a professional in the field of information security, it is your task to protect information, assets, and technologies. Obviously, the software built by or for your company that is collecting, transporting, storing, processing, and finally acting upon your company’s data, is of high interest. Secure development practices should be enforced early on and security must be tested during the software’s entire lifetime.

Within the (ISC)² common body of knowledge for CISSPs, software development security is listed as an individual domain. Several standards and practices covering security in the Software Development Lifecycle (SDLC) are available: ISO/IEC 27024:2011, ISO/IEC TR 15504, or NIST SP800-64 Revision 2, to name some.

All of the above ask for continuous assessment and control of artifacts on the source-code level, especially regarding coding standards and Common Weakness Enumerations (CWE), but only briefly mention static application security testing (SAST) as a possible way to address these issues. In the search for possible concrete tools, NIST provides SP 500-268 v1.1 “Source Code Security Analysis Tool Function Specification Version 1.1”.

In May 2019, NIST withdrew the aforementioned SP800-64 Rev2. NIST SP 500-268 was published over nine years ago. This seems to be symptomatic for an underlying issue we see: the standards cannot keep up with the rapid pace of development and change in the field.

A good example is the dawn of the development language Rust, which addresses a major source of security issues presented by the classically used language C++ – namely memory management. Major players in the field such as Microsoft and Google saw great advantages and announced that they would focus future developments towards Rust. While the standards mention development languages superior to others, neither the mechanisms used by Rust nor Rust itself is mentioned.

In the field of Static Code Analysis, the information in NIST SP 500-268 is not wrong, but the paper simply does not mention advances in the field.

Let us briefly discuss two aspects: First, the wide use of open source software gave us insight into a vast quantity of source code changes and the reasoning behind them (security, performance, style). On top of that, we have seen increasing capacities of CPU power to process this data, accompanied by algorithmic improvements. Nowadays, we have a large lake of training data available. To use our company as an example, in order to train our underlying model for C++ alone, we are scanning changes in over 200,000 open source projects with millions of files containing rich history.

Secondly, in the past decade, we’ve witnessed tremendous advances in machine learning. We see tools like GPT-3 and their applications in source code being discussed widely. Classically, static source code analysis was the domain of Symbolic AI—facts and rules applied to source code. The realm of source code is perfectly suited for this approach since software source code has a well-defined syntax and grammar. The downside is that these rules were developed by engineers, which limits the pace in which rules can be generated. The idea would be to automate the rule construction by using machine learning.
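For readers unfamiliar with the symbolic approach, here is a deliberately tiny example of a hand-written rule of that kind, using Python's ast module to flag calls to eval(). The rule is ours for illustration and is not drawn from any of the standards or tools named above.

```python
import ast

RULE = "avoid-eval"

def check(source):
    """Walk the syntax tree and report every call to eval() with its line number."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name) \
                and node.func.id == "eval":
            findings.append((RULE, node.lineno))
    return findings

print(check("x = eval(user_input)\ny = 1 + 2\n"))   # [('avoid-eval', 1)]
```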

Recently, we see research in the field of machine learning being applied to source code. Again, let us use our company as an example: By using the vast amount of changes in open source, our system looks out for patterns connected to security. It presents possible rules to an engineer together with found cases in the training set—both known and fixed, as well as unknown.

Also, the system supports parameters in the rules. Possible values for these parameters are collected by the system automatically. As a practical example, taint analysis follows incoming data to its use inside of the application to make sure the data is sanitized before usage. The system automatically learns possible sources, sanitization, and sink functions.
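The source/sanitizer/sink model behind taint analysis can be sketched in a few lines. The function names below are hypothetical, and a real engine tracks data flow through the program rather than a flat call sequence, but the rule structure, which is exactly what the learned parameters above populate, looks like this:

```python
# Illustrative rule parameters; in the system described above these sets are
# learned automatically rather than written by hand.
SOURCES    = {"read_http_param", "read_env"}      # where untrusted data enters
SANITIZERS = {"escape_sql", "validate_int"}       # calls that neutralize it
SINKS      = {"run_sql_query", "exec_shell"}      # dangerous uses of the data

def find_taint_violations(call_trace):
    """Flag any sink reached by tainted data with no sanitizer in between."""
    tainted = False
    violations = []
    for call in call_trace:
        if call in SOURCES:
            tainted = True
        elif call in SANITIZERS:
            tainted = False
        elif call in SINKS and tainted:
            violations.append(call)
    return violations

print(find_taint_violations(["read_http_param", "run_sql_query"]))               # ['run_sql_query']
print(find_taint_violations(["read_http_param", "escape_sql", "run_sql_query"]))  # []
```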

Back to the NIST Special Papers: With the withdrawal of SP 800-64 Rev 2, users were pointed to NIST SP 800-160 Vol 1 for the time being until a new, updated white paper is published. This was at the end of May 2019. The nature of these papers is to only describe high-level best practices, list some examples, and stay rather vague in concrete implementation. Yet, the documents are the basis for reviews and audits. Given the importance of the field, it seems as if a major component is missing. It is also time to think about processes that would help us to keep up with the pace of technology.

37% of remote employees have no security restrictions on corporate devices

ManageEngine unveiled findings from a report that analyzes behaviors related to personal and professional online usage patterns.

Security restrictions on corporate devices

The report combines a series of surveys conducted among nearly 1,500 employees amid the pandemic as many people were accelerating online usage due to remote work and stay-at-home orders. The findings evaluate users’ web browsing habits, opinions about AI-based recommendations, and experiences with chatbot-based customer service.

“This research illuminates the challenges of unsupervised employee behaviors, and the need for behavioral analytics tools to help ensure business security and productivity,” said Rajesh Ganesan, vice president at ManageEngine.

“While IT teams have played a crucial role in supporting remote work and business continuity during the pandemic, now is an important time to evaluate the long-term effectiveness of current strategies and augment data analytics to IT operations that will help sustain seamless, secure operations.”

Risky online behaviors could compromise corporate data and devices

63% of respondents report that their organization has provided them with a corporate device to utilize while working remotely.

Interestingly, 37% of those respondents also say that there are no security restrictions on these corporate devices. Therefore, risky online activities such as visiting unsecured websites, sharing personal information, and downloading third-party software could pose potential threats.

For example, 54% said they would still visit a website after receiving a warning about potential insecurities. This percentage is also significantly higher among younger generations – including 42% of people aged 18-24 and 40% of those aged 25-34.

Remote work has its hiccups, but IT teams have been responsive

79% of respondents say they experience at least one technology issue weekly while working from home. The most common issues include slow functionality and download speeds (40%) and unreliable connectivity (25%).

However, IT teams have been committed to solving these challenges. For example, 75% of respondents say it’s been easy to communicate with their IT teams to resolve these issues. Chatbots, AI, and automation are becoming increasingly more effective and trusted.

76% said their experience with chatbot-based support has been “excellent” or “satisfactory,” and 55% said their issue was resolved in a timely manner. As it relates to artificial intelligence, 67% say they trust these solutions to make recommendations for them.

The increasing comfort with automation technologies can help IT teams support both front and back-end business functions, especially during times of increased online activities due to the pandemic.

Progress in implementing ethical and trusted AI-enabled systems still inconsistent

COVID-19 has put a spotlight on ethical issues emerging from the increased use of AI applications and the potential for bias and discrimination.

A report from the Capgemini Research Institute found that in 2020, 45% of organizations had defined an ethical charter to provide guidelines on AI development, up from 5% in 2019, as businesses recognize the importance of having defined standards across industries.

However, a lack of leadership in terms of how these systems are developed and used is coming at a high cost for organizations.

The report notes that while organizations are more ethically aware, progress in implementing ethical AI has been inconsistent. For example, the progress on “fairness” (65%) and “auditability” (45%) dimensions of ethical AI has been non-existent, while transparency has dropped from 73% to 59%, despite the fact that 58% of businesses say they have been building awareness amongst employees about issues that can result from the use of AI.

The research also reveals that 70% of customers want a clear explanation of results and expect organizations to provide AI interactions that are transparent and fair.

Ethical governance has become a prerequisite

The need for organizations to implement an ethical charter is also driven by increased regulatory frameworks. For example, the European Commission has issued guidelines on the key ethical principles that should be used for designing AI applications.

Meanwhile, guidelines issued by the FTC in early 2020 call for transparent AI, stating that when an AI-enabled system makes an adverse decision (such as declining credit for a customer), then the organization should show the affected consumer the key data points used in arriving at the decision and give them the right to change any incorrect information.
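As a hypothetical sketch of what that guidance implies in code (the feature names, weights, and threshold are our illustrative assumptions, not part of the FTC text): when a simple scoring model declines an application, it should return the data points that weighed most heavily against the applicant so they can contest incorrect values.

```python
WEIGHTS = {"income": 0.4, "debt_ratio": -0.5, "late_payments": -0.3}   # illustrative only
APPROVAL_THRESHOLD = 0.2

def decide_with_explanation(applicant):
    """Score an application and surface the data points driving an adverse decision."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    approved = sum(contributions.values()) >= APPROVAL_THRESHOLD
    # Key data points: the features that pushed the score down the most.
    negatives = sorted((f for f in contributions if contributions[f] < 0),
                       key=lambda f: contributions[f])
    return {"approved": approved,
            "key_data_points": {f: applicant[f] for f in negatives[:2]}}

print(decide_with_explanation({"income": 0.5, "debt_ratio": 0.9, "late_payments": 1.0}))
# -> {'approved': False, 'key_data_points': {'debt_ratio': 0.9, 'late_payments': 1.0}}
```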

However, while globally 73% of organizations informed users about the ways in which AI decisions might affect them in 2019, today, this has dropped to 59%.

According to the report, this is indicative of current circumstances brought about by COVID-19, growing complexity of AI models, and a change in consumer behavior, which has disrupted the functionalities of the AI algorithms.

New factors, including a preference for safety, bulk buying, and a lack of training data for similar situations from the past, have meant that organizations are redesigning their systems to suit a new normal; however, this has led to less transparency.

Discriminatory bias with AI systems comes at a high cost for orgs

Many public and private institutions deployed a range of AI technologies during COVID-19 in an attempt to curtail the impacts wrought by the pandemic. As these continue, it is critical for organizations to uphold customer trust by furthering positive relationships between AI and consumers. However, reports show that datasets collected for healthcare and the public sector are subject to social and cultural bias.

This is not limited to just the public sector. The research found that 65% of executives said they were aware of the issue of discriminatory bias with AI systems. Further, close to 60% of organizations have attracted legal scrutiny and 22% have faced a customer backlash in the last two to three years because of decisions reached by AI systems.

In fact, 45% of customers noted they will share their negative experiences with family and friends and urge them not to engage with an organization, 39% will raise their concerns with the organization and demand an explanation, and 39% will switch from the AI channel to a higher-cost human interaction. 27% of consumers say they would cease dealing with the organization altogether.

Establish ownership of ethical issues – leaders must be accountable

Only 53% of organizations have a leader who is responsible for the ethics of AI systems at their organization, such as a Chief Ethics Officer. It is crucial to establish leadership at the top to ensure these issues receive due priority from top management and to create ethically robust AI systems.

In addition, leaders in business and technology functions must be fully accountable for the ethical outcomes of AI applications. The research shows that only half said they had a confidential hotline or ombudsman to enable customers and employees to raise ethical issues with AI systems.

The report highlights seven key actions for organizations to build an ethically robust AI system, which need to be underpinned by a strong foundation of leadership, governance, and internal practices:

  • Clearly outline the intended purpose of AI systems and assess their overall potential impact
  • Proactively deploy AI for the benefit of society and environment
  • Embed diversity and inclusion principles throughout the lifecycle of AI systems
  • Enhance transparency with the help of technology tools
  • Humanize the AI experience and ensure human oversight of AI systems
  • Ensure technological robustness of AI systems
  • Protect people’s individual privacy by empowering them and putting them in charge of AI interactions

Anne-Laure Thieullent, Artificial Intelligence and Analytics Group Offer Leader at Capgemini, explains, “Given its potential, it would be a disservice if the ethical use of AI is only limited to ensure no harm to users and customers. It should be a proactive pursuit of environmental good and social welfare.

“AI is a transformational technology with the power to bring about far-reaching developments across the business, as well as society and the environment. This means governmental and non-governmental organizations that possess the AI capabilities, wealth of data, and a purpose to work for the welfare of society and environment must take greater responsibility in tackling these issues to benefit societies now and in the future.”

Inadequate skills and employee burnout are the biggest barriers to digital transformation

Nearly six in ten organizations have accelerated their digital transformation due to the COVID-19 pandemic, an IBM study of global C-suite executives revealed.

Top priorities are shifting dramatically as executives plan for an uncertain future

Digital transformation barriers

Traditional and perceived barriers like technology immaturity and employee opposition to change have fallen away – in fact, 66% of executives surveyed said they have completed initiatives that previously encountered resistance.

Participating businesses are seeing more clearly the critical role people play in driving their ongoing transformation. Leaders surveyed called out organizational complexity, inadequate skills and employee burnout as the biggest hurdles to overcome – both today and in the next two years.

The study finds a significant disconnect in how effective leaders and employees believe companies have been in addressing these gaps. 74% of executives surveyed believe they have been helping their employees learn the skills needed to work in a new way, while just 38% of employees surveyed agree.

80% of executives surveyed say that they are supporting the physical and emotional health of their workforce, while just 46% of employees surveyed feel that support.

The study, which includes input from more than 3,800 C-suite executives in 20 countries and 22 industries, shows that executives surveyed are facing a proliferation of initiatives due to the pandemic and having difficulty focusing, but do plan to prioritize internal and operational capabilities such as workforce skills and flexibility – critical areas to address in order to jumpstart progress.

“For many the pandemic has knocked down previous barriers to digital transformation, and leaders are increasingly relying on technology for mission-critical aspects of their enterprise operations,” said Mark Foster, senior vice president, IBM Services.

“But looking ahead, leaders need to redouble their focus on their people as well as the workflows and technology infrastructure that enable them – we can’t underestimate the power of empathetic leadership to drive employees’ confidence, effectiveness and well-being amid disruption.”

The study reveals three proactive steps that emerging leaders surveyed are taking to survive and thrive.

Improving operational scalability and flexibility

The ongoing disruption of the pandemic has shown how important it can be for businesses to be built for change. Many executives are facing demand fluctuations, new challenges to support employees working remotely and requirements to cut costs.

In addition, the study reveals that the majority of organizations are making permanent changes to their organizational strategy. For instance, 94% of executives surveyed plan to participate in platform-based business models by 2022, and many reported they will increase participation in ecosystems and partner networks.

Executing these new strategies may require a more scalable and flexible IT infrastructure. Executives are already anticipating this: the survey showed respondents plan a 20 percentage point increase in prioritization of cloud technology in the next two years.

What’s more, executives surveyed plan to move more of their business functions to the cloud over the next two years, with customer engagement and marketing being the top two cloudified functions.

Applying AI and automation to help make workflows more intelligent

COVID-19 has disrupted critical workflows and processes at the heart of many organizations’ core operations. Technologies like AI, automation and cybersecurity that could help make workflows more intelligent, responsive and secure are increasing in priority across the board for responding global executives. Over the next two years, the report finds:

  • Prioritization of AI technology will increase by 20 percentage points
  • 60% of executives surveyed say they have accelerated process automation, and many will increasingly apply automation across all business functions
  • 76% of executives surveyed plan to prioritize cybersecurity – twice as many as deploy the technology today.

As executives increasingly invest in cloud, AI, automation and other exponential technologies, leaders should keep in mind the users of that technology – their people. These digital tools should enable a positive employee experience by design, and support people’s innovation and productivity.

COVID-19 created a sense of urgency around digital transformation

Leading, engaging and enabling the workforce in new ways

The study showed placing a renewed focus on people may be critical amid the COVID-19 pandemic while many employees are working outside of traditional offices and dealing with heightened personal stress and uncertainty.

Ongoing IBV consumer research has shown that the expectations employees have of their employers have shifted amidst the pandemic – employees now expect that their employers will take an active role in supporting their physical and emotional health as well as the skills they need to work in new ways.

To address this gap, executives should place deeper focus on their people, putting employees’ end-to-end well-being first. Empathetic leaders who encourage personal accountability and support employees to work in self-directed squads that apply design thinking, Agile principles and DevOps tools and techniques can be beneficial.

Organizations should also think about adopting a holistic, multi-modal model of skills development to help employees develop both the behavioral and technical skills required to work in the new normal and foster a culture of continuous learning.

Ongoing and initial costs top list of barriers to 5G implementation

5G is set to deliver higher data transfer rates for mission-critical communications and will allow massive broadband capacities, enabling high-speed communication across various applications such as the Internet of Things (IoT), robotics, advanced analytics and artificial intelligence.


According to a study from CommScope, only 46% of respondents feel their current network infrastructure is capable of supporting 5G, but 68% think 5G will have a significant impact on their agency operations within one to four years.

Of the respondents who do not feel their current infrastructure is capable of supporting 5G, none have deployed 5G, 19% are piloting, 43% are planning to pilot, and 52% are not planning or evaluating whether to pilot 5G.

Costs reported as top barriers to 5G implementation

According to the report, ongoing and initial costs are reported as top barriers for federal agencies wishing to implement 5G – 44% believe initial/up-front costs will be the biggest barrier and 49% are concerned about ongoing costs.

“There is no single approach to 5G and no one-size-fits-all 5G solution,” said Chris Collura, vice president, Federal business for CommScope.

“This study indicates that federal agencies are at the beginning stages of 5G evaluation and deployment. As they are looking to finalize their strategy for connectivity, agencies should also consider private networks, whether those are private LTE networks, private 5G networks, or a migration from one to the other to ensure flexibility and scalability.”

Desired outcomes for federal agencies

Remote employee productivity (40%) is one of the top desired outcomes for federal agencies looking to implement 5G, along with introducing high bandwidth (39%), higher throughput (39%) and better connectivity (38%).

Additional findings from the study include:

  • 32% hope that 5G will make it easier to share information securely and 32% would like to see easier access to data
  • 82% plan to or have already adopted 5G with 6% having already deployed 5G, 14% piloting 5G and 62% evaluating/planning to pilot 5G
  • 71% are looking at hardware, software or endpoint upgrades to support 5G
  • 83% believe it is very/somewhat important for mission-critical traffic on the agency network to remain onsite while 64% feel it is very/somewhat important

Intelligent processes and tech increase enterprises’ competitiveness

Enterprises of the future will be built on a foundation of artificial intelligence (AI), analytics, machine learning, deep learning and automation, that are central to solving business problems and driving innovation, Wipro finds.


Most businesses consider AI to be critical to improve operational efficiency, reduce employee time on manual tasks, and enhance the employee and customer experience.

The report examines the current landscape and shows the challenges and driving factors for businesses to become truly intelligent enterprises. Wipro surveyed 300 respondents in the UK and US across key industry sectors such as financial services, healthcare, technology, manufacturing, retail and consumer goods.

The report highlights that while collecting data is critical, the ability to combine this with a host of technologies to leverage insights creates an intelligent enterprise. Organizations that fast-track adoption of intelligent processes and technologies stand to gain an immediate competitive advantage over their counterparts.

Key findings

  • While 80% of organizations recognize the importance of being intelligent, only 17% would classify their organizations as an Intelligent Enterprise.
  • 98% of those surveyed believe that being an Intelligent Enterprise yields benefits to organizations, the most important being improved customer experience, faster business decisions and increased organizational agility.
  • 91% of organizations feel there are data barriers towards being an Intelligent Enterprise, with security, quality and seamless integration being of utmost concern.
  • 95% of business leaders surveyed see AI as critical to being Intelligent Enterprises, yet, currently, only 17% can leverage AI across the entire organization.
  • 74% of organizations consider investment in technology the most likely enabler of an Intelligent Enterprise; however, 42% of them think this must be complemented with efforts to re-skill the workforce.

Jayant Prabhu, VP & Head – Data, Analytics & AI, Wipro said, “Organizations now need new capabilities to navigate the current challenges. The report amplifies the opportunity to gain a first-mover advantage to being Intelligent.

“The ability to take productive decisions depends on an organization’s ability to generate accurate, fast and actionable intelligence. Successful organizations are those that quickly adapt to the new technology landscape to transform into an Intelligent Enterprise.”

Are today’s organizations ready for the data age?

67% of business and IT managers expect the sheer quantity of data to grow nearly five times by 2025, a Splunk survey reveals.


The research shows that leaders see the significant opportunity in this explosion of data and believe data is extremely or very valuable to their organization in terms of: overall success (81%), innovation (75%) and cybersecurity (78%).

81% of survey respondents believe data to be very or highly valuable yet 57% fear that the volume of data is growing faster than their organizations’ ability to keep up.

“The data age is here. We can now quantify how data is taking center stage in industries around the world. As this new research demonstrates, organizations understand the value of data, but are overwhelmed by the task of adjusting to the many opportunities and threats this new reality presents,” said Doug Merritt, President and CEO, Splunk.

“There are boundless opportunities for organizations willing to quickly learn and adapt, embrace new technologies and harness the power of data.”

The data age has been accelerated by emerging technologies powered by, and contributing to, exponential data growth. Chief among these emerging technologies are Edge Computing, 5G networking, IoT, AI/ML, AR/VR and Blockchain.

These are the very same technologies that 49% of those surveyed expect to use to harness the power of data, yet on average only 42% feel they have a high level of understanding across all six.

Data is valuable, and data anxiety is real

To thrive in this new age, every organization needs a complete view of its data — real-time insight, with the ability to take real-time action. But many organizations feel overwhelmed and unprepared. The study quantifies the emergence of a data age as well as the recognition that organizations have some work to do in order to use data effectively and be successful.

  • Data is extremely or very valuable to organizations in terms of: overall success (81%), innovation (75%) and cybersecurity (78%).
  • And yet, 66% of IT and business managers report that half or more of their organizations’ data is dark (untapped, unknown, unused) — a 10% increase over the previous year.
  • 57% say the volume of data is growing faster than their organizations’ ability to keep up.
  • 47% acknowledge their organizations will fall behind when faced with rapid data volume growth.

Some industries are more prepared than others

The study quantifies the emergence of a data age and the adoption of emerging technologies across industries, including:

  • Across industries, IoT has the most current users (but only 28%). 5G has the fewest and has the shortest implementation timeline at 2.6 years.
  • Confidence in understanding of 5G’s potential varies: 59% in France, 62% in China and only 24% in Japan.
  • For five of the six technologies, financial services leads in terms of current development of use cases. Retail comes second in most cases, though retailers lag notably in adoption of AI.
  • 62% of healthcare organizations say that half or more of their data is dark and that they struggle to manage and leverage data.
  • The public sector lags commercial organizations in adoption of emerging technologies.
  • More manufacturing leaders (78%) predict growth in data volume than in any other industry; 76% expect the value of data to continue to rise.

Some countries are more prepared than others

The study also found that countries seen as technology leaders, like the U.S. and China, are more likely to be optimistic about their ability to harness the opportunities of the data age.

  • 90% of business leaders from China expect the value of data to grow. They are by far the most optimistic about the impact of emerging technologies, and they are getting ready. 83% of Chinese organizations are prepared, or are preparing, for rapid data growth compared to just 47% across all regions.
  • U.S. leaders are the second most confident in their ability to prepare for rapid data growth, with 59% indicating that they are at least somewhat confident.
  • In France, 59% of respondents say that no one in their organization is having conversations about the impact of the data age. Meanwhile, in Japan 67% say their organization is struggling to stay up to date, compared to the global average of 58%.
  • U.K. managers report relatively low current usage of emerging technologies but are optimistic about plans to use them in the future. For example, just 19% of U.K. respondents say they are currently using AI/ML technologies, but 58% say they will use them in the near future.

Reduced lifespan of TLS certificates could cause increase in outages

Beginning September 1st, all publicly trusted TLS certificates must have a lifespan of 398 days or less. According to security experts from Venafi, this latest change is another indication that machine identity lifetimes will continue to shrink.


Since many organizations lack the automation capabilities necessary to replace certificates with short lifespans at machine scale and speed, they are likely to see sharp increases in outages caused by unexpected certificate expirations.

“Apple’s unilateral move to reduce machine identity lifespans will profoundly impact businesses and governments globally,” said Kevin Bocek, vice president of security strategy and threat intelligence at Venafi.

“The interval between certificate lifecycle changes is shrinking, while at the same time, certificate lifecycles themselves are being reduced. In addition, the number of machines—including IoT and smart devices, virtual machines, AI algorithms and containers—that require machine identities is skyrocketing.

“It seems inevitable that certificate-related outages, similar to those that have haunted Equifax, LinkedIn, and the State of California, will spiral out-of-control over the next few years.”

Certificate lifespans

The interval between changes in the length of certificate lifespans has been shrinking over the last decade:

  • Pre-2011: Certificate lifespans were 8–10 years (96 months)
  • 2012: Certificate lifespans were shortened to 60 months (five years), a reduction of 37%. This change was preplanned in CA/Browser Forum Baseline Requirements.
  • 2015: Certificate lifespans were shortened to 39 months (3 years), a reduction of 35%. This change happened three years after the five-year limitation was adopted.
  • 2018: Certificate lifespans were shortened to 27 months (two years), a reduction of 30%. This change happened two years after the three-year limitation was adopted.
  • 2020: Certificate lifespans were shortened to 13 months, a reduction of 51%. This change happened one year after the two-year limitation was adopted.

Bocek continued: “If the interval between lifecycle changes continues on its current cadence, it’s likely that we could see certificate lifespans for all publicly trusted TLS certificates reduced to 6 months by early 2021 and perhaps become as short as three months by the end of next year.

“Actions by Apple, Google or Mozilla could accomplish this. Ultimately, the only way for organizations to eliminate this external, outside risk is total visibility, comprehensive intelligence and complete automation for TLS machine identities.”

Digital keys and certificates act as machine identities

They control the flow of sensitive data to trusted machines in a wide range of security and operational systems.

Enterprises rely on machine identities to connect and encrypt over 330 million internet domains, over 1.8 billion websites and countless applications. When these certificates expire unexpectedly, the machines or applications they identify will cease to communicate with other machines, shutting down critical business processes.

Unfortunately, eliminating certificate-related outages within complex, multitiered architectures can be challenging. Ownership and control of these certificates often reside in different parts of the organization, with certificates sometimes shared across multiple layers of infrastructure.

These problems are exacerbated by the fact that most organizations have certificate renewal processes that are prone to human error. When combined, these factors make outage prevention a complex process that is made much more difficult by shorter certificate lifetimes.
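
Shorter lifetimes make continuous visibility into expiration dates all the more important. As a minimal illustration (not Venafi’s tooling), the following Python sketch checks how many days remain before a server’s certificate expires – the kind of check that could feed an automated renewal pipeline. The hostname and the 30-day renewal window are placeholder assumptions.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(hostname: str, port: int = 443) -> int:
    """Return the number of days before the host's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()                   # parsed certificate fields
    # The 'notAfter' field looks like 'Jun  1 12:00:00 2025 GMT'
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (not_after.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

# Placeholder host and renewal window
if days_until_expiry("example.com") < 30:
    print("Certificate expires within 30 days - schedule renewal")
```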

Worldwide AI spending to reach more than $110 billion in 2024

Global spending on AI is forecast to more than double over the next four years, growing from $50.1 billion in 2020 to over $110 billion in 2024.


According to IDC, spending on AI systems will accelerate over the next several years as organizations deploy artificial intelligence as part of their digital transformation efforts and to remain competitive in the digital economy. The compound annual growth rate (CAGR) for the 2019-2024 period will be 20.1%.
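
For readers who want to sanity-check growth figures like this, compound annual growth rate is simply (end/start)^(1/years) − 1. The quick Python check below is illustrative only: IDC’s 20.1% figure is based on a 2019 baseline that isn’t published here, so the calculation uses the 2020 and 2024 figures over four years.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

# $50.1B in 2020 to $110B in 2024, i.e. four years of growth
print(f"{cagr(50.1, 110.0, 4):.1%}")   # roughly 21.7% per year
```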

“Companies will adopt AI — not just because they can, but because they must,” said Ritu Jyoti, Program VP, Artificial Intelligence at IDC.

“AI is the technology that will help businesses to be agile, innovate, and scale. The companies that become ‘AI powered’ will have the ability to synthesize information (using AI to convert data into information and then into knowledge), the capacity to learn (using AI to understand relationships between knowledge and apply the learning to business problems), and the capability to deliver insights at scale (using AI to support decisions and automation).”

Two of the leading drivers for AI adoption are delivering a better customer experience and helping employees to get better at their jobs. This is reflected in the leading use cases for AI, which include automated customer service agents, sales process recommendation and automation, automated threat intelligence and prevention, and IT automation. Combined, these four use cases will represent nearly a third of all AI spending this year. Some of the fastest growing use cases are automated human resources, IT automation, and pharmaceutical research and discovery.

AI spending forecast by industry

The two industries that will spend the most on AI solutions throughout the forecast are retail and banking. The retail industry will largely focus its AI investments on improving the customer experience via chatbots and recommendation engines while banking will include spending on fraud analysis and investigation and program advisors and recommendation systems.

Discrete manufacturing, process manufacturing, and healthcare will round out the top 5 industries for AI spending in 2020. The industries that will see the fastest growth in AI spending over the 2020-2024 forecast are media, federal/central government, and professional services.

“COVID-19 caused a slowdown in AI investments across the transportation industry as well as the personal and consumer services industry, which includes leisure and hospitality businesses. These industries will be cautious with their AI investments in 2020 as their focus will be on cost containment and revenue generation rather than innovation or digital experiences,” said Andrea Minonne, senior research analyst, Customer Insights & Analysis, IDC.

“On the other hand, AI has played a role in helping societies deal with large-scale disruptions caused by quarantines and lockdowns. Some European governments have partnered with AI start-ups to deploy AI solutions to monitor the outcomes of their social distancing rules and assess whether the public was complying with them. Also, hospitals across Europe are using AI to speed up COVID-19 diagnosis and testing, to provide automated remote consultations, and to optimize capacity at hospitals.”

“In the short term, the pandemic caused supply chain disruptions and store closures with continued impact expected to linger into 2021 and the outyears. For the most impacted industries, this has caused some delays in AI deployments,” said Stacey Soohoo, research manager, Customer Insights & Analysis, IDC.

“Elsewhere, enterprises have seen a silver lining in the current situation: an opportunity to become more resilient and agile in the long run. Artificial intelligence continues to be a key technology in the road to recovery for many enterprises and adopting artificial intelligence will help many to rebuild or enhance future revenue streams and operations.”

Software, hardware and geographical trends

Software and services will each account for a little more than one third of all AI spending this year with hardware delivering the remainder. The largest share of software spending will go to AI applications ($14.1 billion) while the largest category of services spending will be IT services ($14.5 billion).

Servers ($11.2 billion) will dominate hardware spending. Software will see the fastest growth in spending over the forecast period with a five-year CAGR of 22.5%.

On a geographic basis, the United States will deliver more than half of all AI spending throughout the forecast, led by the retail and banking industries. Western Europe will be the second largest geographic region, led by banking, retail, and discrete manufacturing.

China will be the third largest region for AI spending with state/local government, banking, and professional services as the leading industries. The strongest spending growth over the five-year forecast will be in Japan (32.1% CAGR) and Latin America (25.1% CAGR).

Facing gender bias in facial recognition technology

In the 1960s, Woodrow W. Bledsoe created a secret program that manually identified points on a person’s face and compared the distances between these coordinates with other images.


Facial recognition technology has come a long way since then. The field has evolved quickly and software can now automatically process staggering amounts of facial data in real time, dramatically improving the results (and reliability) of matching across a variety of use cases.

Despite all of the advancements we’ve seen, many organizations still rely on the same algorithm used by Bledsoe’s database – known as “k-nearest neighbors” or k-NN. Since each face has multiple coordinates, a comparison of these distances over millions of facial images requires significant data processing. The k-NN algorithm simplifies this process and makes matching these points easier by considerably reducing the data set. But that’s only part of the equation. Facial recognition also involves finding the location of a feature on a face before evaluating it. This requires a different algorithm such as HOG (we’ll get to it later).
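
To make the idea concrete, here is a minimal nearest-neighbour matching sketch in Python. It illustrates the general k-NN approach over face embeddings rather than any vendor’s implementation, and the 0.6 distance threshold is an assumption borrowed from common face-recognition practice.

```python
import numpy as np

def knn_match(probe, gallery, labels, threshold=0.6):
    """Match a probe face embedding against a gallery of known-face embeddings.

    probe     : (d,) vector for the face to identify
    gallery   : (n, d) matrix of embeddings for the known faces
    labels    : n identity labels for the gallery rows
    threshold : maximum distance still accepted as a match (assumed value)
    """
    gallery = np.asarray(gallery, dtype=float)
    dists = np.linalg.norm(gallery - np.asarray(probe, dtype=float), axis=1)
    best = int(np.argmin(dists))              # the single nearest neighbour (k = 1)
    if dists[best] > threshold:
        return None, float(dists[best])       # no known face is close enough
    return labels[best], float(dists[best])

# Toy example: three "known" embeddings and a probe close to the second one.
gallery = [[0.1, 0.2, 0.3], [0.9, 0.8, 0.7], [0.4, 0.4, 0.4]]
print(knn_match([0.88, 0.79, 0.72], gallery, ["alice", "bob", "carol"]))
```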

The problem

The algorithms used for facial recognition today rely heavily on machine learning (ML) models, which require significant training. Unfortunately, the training process can introduce biases into these technologies. If the training data doesn’t contain a representative sample of the population, the resulting model will fail to correctly identify the underrepresented groups.

While this may not be a significant problem when matching faces for social media platforms, it can be far more damaging when the facial recognition software from Amazon, Google, Clearview AI and others is used by government agencies and law enforcement.

Previous studies on this topic found that facial recognition software suffers from racial biases, but overall, the research on bias has been thin. The consequences of such biases can be dire for both people and companies. Further complicating matters is the fact that even small changes to one’s face, hair or makeup can impact a model’s ability to accurately match faces. If not accounted for, this can create distinct challenges when trying to leverage facial recognition technology to identify women, who generally tend to use beauty and self-care products more than men.
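
One practical way to surface this kind of bias is to audit error rates per group rather than only in aggregate. A minimal sketch of such an audit follows; the labels and groups are hypothetical, purely to show the calculation.

```python
import numpy as np

def error_rate_by_group(y_true, y_pred, groups):
    """Misidentification rate broken out by group (e.g. gender)."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        str(g): float(np.mean(y_true[groups == g] != y_pred[groups == g]))
        for g in np.unique(groups)
    }

# Hypothetical identities and predictions, purely to show the calculation.
print(error_rate_by_group(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 0],
    groups=["male", "male", "female", "male", "female", "female"],
))
# e.g. {'female': 0.67, 'male': 0.0}
```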

Understanding sexism in facial recognition software

Just how bad are gender-based misidentifications? Our team at WatchGuard conducted some additional facial recognition research, looking solely at gender biases, to find out. The results were eye-opening: the solutions we evaluated misidentified women as much as 18% more often than men.

You can imagine the terrible consequences this type of bias could generate. For example, a smartphone relying on face recognition could block access, a police officer using facial recognition software could mistakenly identify an innocent bystander as a criminal, or a government agency might call in the wrong person for questioning based on a false match. The list goes on. The reality is that the culprit behind these issues is bias within model training that creates biases in the results.

Let’s explore how we uncovered these results.

Our team performed two separate tests – the first using Amazon Rekognition and the second using Dlib. Unfortunately, with Amazon Rekognition we were unable to unpack exactly how its ML model and algorithm work due to transparency issues (although we assume it’s similar to Dlib). Dlib is a different story: it uses local resources to identify the faces provided to it. It comes pretrained to locate faces using either HOG, a slower CPU-based algorithm, or a CNN, a faster algorithm that makes use of the specialized processors found in graphics cards.
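
For reference, Dlib exposes both detectors directly. The sketch below assumes dlib is installed, that the standard mmod_human_face_detector.dat model file has been downloaded, and that face.jpg is a placeholder image.

```python
import dlib

# HOG-based detector (CPU-only) and CNN-based detector (requires the
# pretrained mmod_human_face_detector.dat model file distributed by dlib).
hog_detector = dlib.get_frontal_face_detector()
cnn_detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")

img = dlib.load_rgb_image("face.jpg")        # placeholder image path

hog_faces = hog_detector(img, 1)             # second argument: upsampling passes
cnn_faces = cnn_detector(img, 1)

print(f"HOG found {len(hog_faces)} face(s), CNN found {len(cnn_faces)} face(s)")
for det in cnn_faces:
    # CNN detections wrap a rectangle plus a confidence value
    print(det.rect.left(), det.rect.top(), det.rect.right(), det.rect.bottom(), det.confidence)
```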

Both services provide match results with additional information. Besides the match found, each returns a similarity score indicating how closely the submitted face matches the known face. If the face isn’t actually on file, a similarity threshold set too low can produce an incorrect match; conversely, a genuine match can carry a low similarity score when the image doesn’t show the face clearly.
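
Amazon Rekognition surfaces that similarity score through its CompareFaces API. A minimal boto3 sketch follows; the image file names are placeholders and AWS credentials are assumed to be configured.

```python
import boto3

rekognition = boto3.client("rekognition")    # assumes AWS credentials are configured

# Placeholder file names: a known (source) face and a probe (target) image.
with open("known_face.jpg", "rb") as src, open("probe_face.jpg", "rb") as tgt:
    response = rekognition.compare_faces(
        SourceImage={"Bytes": src.read()},
        TargetImage={"Bytes": tgt.read()},
        SimilarityThreshold=80,              # only matches at or above this score are returned
    )

if response["FaceMatches"]:
    for match in response["FaceMatches"]:
        print(f"Match with similarity {match['Similarity']:.2f}")
else:
    print("No match above the threshold")
```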

For the data set, we used a database of faces called Labeled Faces in the Wild, and we only investigated faces that matched another face in the database. This allowed us to test matching faces and similarity scores at the same time.

Amazon Rekognition correctly identified all pictures we provided. However, when we looked more closely at the data, our team saw a wider distribution of similarity scores for female faces than for male faces. We saw more female faces with higher similarity scores than male faces, and more female faces with lower similarity scores than male faces (this actually matches a recent study performed around the same time).

What does this mean? Essentially, it means a female face not found in the database is more likely to produce a false match. Also, because of the lower similarity scores for female faces, our team was confident that we would see more errors in identifying female faces than male faces, given enough images.

Amazon Rekognition gave accurate results but lacked consistency and precision between male and female faces. Male faces were on average 99.06% similar, while female faces were on average 98.43% similar. This might not seem like a big variance, but the gap widened when we looked at the outliers – a standard deviation of 1.64 for males versus 2.83 for females. More female faces fall farther from the average than male faces, meaning a false match on a female face is far more likely than the 0.6% difference in averages alone would suggest.
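
The spread comparison itself is simple descriptive statistics. The short sketch below shows how a per-group mean and standard deviation would be computed; the scores are made up, not the study’s data.

```python
import numpy as np

def similarity_spread(scores):
    """Mean and sample standard deviation of a group's similarity scores."""
    scores = np.asarray(scores, dtype=float)
    return scores.mean(), scores.std(ddof=1)

# Made-up similarity scores for two groups, not the study's data.
male = [99.4, 99.1, 98.9, 99.3, 98.6]
female = [99.5, 98.0, 96.2, 99.8, 97.7]
print("male   mean/std: %.2f / %.2f" % similarity_spread(male))
print("female mean/std: %.2f / %.2f" % similarity_spread(female))
```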

Dlib didn’t perform as well. On average, Dlib misidentified female faces more often than male faces, at an average rate of 5% more misidentified women. When comparing faces with the slower HOG algorithm, the difference grew to 18%. Interestingly, our team found that on average female faces have higher similarity scores than male faces when using Dlib but, as with Amazon Rekognition, also a wider spread of similarity scores, which explains the lower accuracy we found.

Tackling facial recognition bias

Unfortunately, facial recognition software providers struggle to be transparent when it comes to the efficacy of their solutions. For example, our team didn’t find any place in Amazon’s documentation in which users could review the processing results before the software made a positive or negative match.

Unfortunately, this assumption of accuracy (and lack of context from providers) will likely lead to more and more instances of unwarranted arrests, like this one. It’s highly unlikely that facial recognition models will reach 100% accuracy anytime soon, but industry participants must focus on improving their effectiveness nonetheless. Knowing that these programs contain biases today, law enforcement and other organizations should use them as one of many tools – not as a definitive resource.

But there is hope. If the industry can honestly acknowledge and address the biases in facial recognition software, we can work together to improve model training and outcomes, which can help reduce misidentifications not only based on gender, but race and other variables, too.

Researchers develop AI technique to protect medical devices from anomalous instructions

Researchers at Ben-Gurion University of the Negev have developed a new AI technique that will protect medical devices from malicious operating instructions in a cyberattack as well as other human and system errors.


Complex medical devices such as CT (computed tomography), MRI (magnetic resonance imaging) and ultrasound machines are controlled by instructions sent from a host PC.

Abnormal or anomalous instructions introduce many potentially harmful threats to patients, such as radiation overexposure, manipulation of device components or functional manipulation of medical images. Threats can occur due to cyberattacks, human errors such as a technician’s configuration mistake or host PC software bugs.

Dual-layer architecture: AI technique to protect medical devices

As part of his Ph.D. research, BGU researcher Tom Mahler has developed a technique using artificial intelligence that analyzes the instructions sent from the PC to the physical components using a new architecture for the detection of anomalous instructions.

“We developed a dual-layer architecture for the protection of medical devices from anomalous instructions,” Mahler says.

“The architecture focuses on detecting two types of anomalous instructions: (1) context-free (CF) anomalous instructions which are unlikely values or instructions such as giving 100x more radiation than typical, and (2) context-sensitive (CS) anomalous instructions, which are normal values or combinations of values, of instruction parameters, but are considered anomalous relative to a particular context, such as mismatching the intended scan type, or mismatching the patient’s age, weight, or potential diagnosis.

“For example, a normal instruction intended for an adult might be dangerous [anomalous] if applied to an infant. Such instructions may be misclassified when using only the first, CF, layer; however, by adding the second, CS, layer, they can now be detected.”
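
A highly simplified sketch of such a dual-layer check is shown below. The synthetic data and model choices (an unsupervised IsolationForest for the CF layer and a supervised RandomForest for the CS layer) are our own illustrative assumptions, not the BGU implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for instruction parameters (e.g. kVp, mAs) and a binary
# clinical context flag (e.g. paediatric vs. adult scan) - not the study's data.
X_params = rng.normal(loc=[120.0, 200.0], scale=[5.0, 20.0], size=(500, 2))
X_context = rng.integers(0, 2, size=(500, 1)).astype(float)
y_cs = (X_context[:, 0] * (X_params[:, 1] > 220)).astype(int)   # anomalous only in context 1

# Layer 1: context-free (CF) - unsupervised model of physically plausible values.
cf_detector = IsolationForest(random_state=0).fit(X_params)

# Layer 2: context-sensitive (CS) - supervised model that also sees the context.
cs_classifier = RandomForestClassifier(random_state=0).fit(
    np.hstack([X_params, X_context]), y_cs
)

def is_anomalous(params, context):
    """Flag an instruction if either layer rejects it."""
    if cf_detector.predict([params])[0] == -1:        # -1 = CF outlier (e.g. 100x radiation)
        return True
    features = np.concatenate([params, context]).reshape(1, -1)
    return cs_classifier.predict(features)[0] == 1

print(is_anomalous([1200.0, 2000.0], [1.0]))  # wildly out of range: expected CF catch
print(is_anomalous([121.0, 230.0], [1.0]))    # plausible values, suspicious in this context
```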

Improving anomaly detection performance

The research team evaluated the new architecture in the CT domain, using 8,277 recorded CT instructions and evaluated the CF layer using 14 different unsupervised anomaly detection algorithms. Then they evaluated the CS layer for four different types of clinical objective contexts, using five supervised classification algorithms for each context.

Adding the second CS layer to the architecture improved the overall anomaly detection performance from an F1 score of 71.6%, using only the CF layer, to between 82% and 99%, depending on the clinical objective or the body part.

Furthermore, the CS layer enables the detection of CS anomalies, using the semantics of the device’s procedure, an anomaly type that cannot be detected using only the CF layer.