The introduction of 5G will change the way we communicate, multiply the capacity of the information highways, and allow everyday objects to connect to each other in real time.
Its deployment constitutes a true technological revolution, though not one without security hazards. Until 5G technology is fully deployed, some challenges remain to be resolved, including those concerning possible eavesdropping, interference and identity theft.
Unmanned Aerial Vehicles (UAV), also known as drones, are emerging as enablers for many applications and services, such as precision agriculture, search and rescue, or, in the field of communications, temporary network deployment, coverage extension and security.
Giovanni Geraci, a researcher with the Department of Information and Communication Technologies (DTIC) at UPF, points out in a recent study: “On the one hand, it is important to protect the network when it is disturbed by a drone that has connected and generates interference. On the other, in the future, the same drones could assist in the prevention, detection, and recovery of attacks on 5G networks”.
The study poses two different cases.
First, the use of UAVs to prevent possible attacks, still in its early stages of research; and second, how to protect the network when it is disturbed by a drone, a much more realistic scenario, as Geraci explains: “A drone could be the source of interference to users. This can happen if the drone is very high up and when its transmissions travel a long distance because there are no obstacles in the way, such as buildings”.
The integration of UAV devices into future mobile networks may expose those networks to UAV-based attacks. UAVs with cellular connections are likely to experience radio propagation characteristics different from those experienced by terrestrial users.
Once a UAV flies well above the base stations, it can create interference or even support rogue applications, such as a mobile phone connected to a UAV without authorization.
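The long interference range of a high-flying drone follows from line-of-sight propagation. As a rough, textbook illustration (a standard formula, not one from the study), free-space path loss grows only logarithmically with distance:

```latex
\mathrm{FSPL}\;[\mathrm{dB}] \;=\; 20\log_{10}\!\left(d_{\mathrm{km}}\right) \;+\; 20\log_{10}\!\left(f_{\mathrm{MHz}}\right) \;+\; 32.44
```

At 20 dB per decade of distance, an unobstructed aerial link loses far less power than a cluttered terrestrial one (where measured path-loss exponents are typically higher), which is why a UAV above the rooftops is heard by, and interferes with, many more base stations than a ground user.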
Using drones to improve 5G security
Based on the premise that 5G terrestrial networks will never be 100% secure, the authors of this study also suggest using UAVs to improve the security of 5G and beyond-5G wireless networks.
“In particular, in our research we have considered jamming, identity theft, or ‘spoofing’, eavesdropping, and the mitigation mechanisms that are enabled by the versatility of UAVs”, the researchers explain.
The study identifies several areas in which the diversity and 3D mobility of UAVs can effectively improve the security of advanced wireless networks against eavesdropping, interference and ‘spoofing’, whether by preventing attacks before they occur or by enabling rapid detection and recovery.
“The article raises open questions and research directions, including the need for experimental evaluation and a research platform for prototyping and testing the proposed technologies”, Geraci explains.
73% of security and IT executives are concerned about new vulnerabilities and risks introduced by the distributed workforce, Skybox Security reveals.
The report also uncovered an alarming disconnect between confidence in security posture and increased cyberattacks during the global pandemic.
Digital transformation creating the perfect storm
To protect employees from COVID-19, enterprises rapidly shifted to work from home to maintain business productivity. This forced acceleration of digital transformation initiatives created the perfect storm.
2020 is on track to be a record-breaking year for new vulnerabilities, with a 34% increase year-over-year – a leading indicator of the growth of future attacks.
As a result, security teams now have more to protect than ever before. Surveying 295 global executives, the report found that organizations are overconfident in their security posture, and new strategies are needed to secure a long-term distributed workforce.
- Deprioritized security tasks increase risk: Over 30% of security executives said software updates and BYOD policies were deprioritized. Further, 42% noted reporting was deprioritized since the onset of the pandemic.
- Enterprises can’t keep up with the pace: 32% had difficulties validating whether network and security configurations undermined their security posture, and 55% admitted it was at least moderately difficult to validate that network and security configurations did not increase risk.
- Security teams are overconfident in security posture: Only 11% confirmed they could confidently maintain a holistic view of their organizations’ attack surfaces. Shockingly, 93% of security executives were still confident that changes were correctly validated.
- The distributed workforce is here to stay: 70% of respondents projected that at least one-third of their employees will remain remote 18 months from now.
“Traditional detect-and-respond approaches are no longer enough. A radical new approach is needed – one that is rooted in the development of preventative and prescriptive vulnerability and threat management practices,” said Gidi Cohen, CEO, Skybox Security.
“To advance change, it is integral that everything, including data and talent, is working towards enriching the security program as a whole.”
As COVID-19 lockdown measures were implemented in March-April 2020, consumer and business behavioral changes transformed the internet’s shape and how people use it virtually overnight. Many networks experienced a year’s worth of traffic growth (30-50%) in just a few weeks, Nokia reveals.
By September, traffic had stabilized at 20-30% above pre-pandemic levels, with further seasonal growth to come. From February to September, there was a 30% increase in video subscribers, a 23% increase in VPN end-points in the U.S., and a 40-50% increase in DDoS traffic.
Ready for COVID-19
In the decade prior to the pandemic, the internet had already seen massive and transformative changes – both in service provider networks and in the evolved internet architectures for cloud content delivery. Investment during this time meant the networks were in good shape and mostly ready for COVID-19 when it arrived.
Manish Gulyani, General Manager and Head of Nokia Deepfield, said: “Never has so much demand been put on the networks so suddenly, or so unpredictably. With networks providing the underlying connectivity fabric for business and society to function as we shelter-in-place, there is a greater need than ever for holistic, multi-dimensional insights across networks, services, applications and end users.”
The networks were made for this
While the networks held up during the biggest demand peaks, data from September 2020 indicates that traffic levels remain elevated even as lockdowns ease, meaning service providers will need to continue engineering headroom into their networks for future eventualities.
Content delivery chains are evolving
Demand for streaming video, low-latency cloud gaming and video conferencing, and fast access to cloud applications and services, all placed unprecedented pressure on the internet service delivery chain.
Just as Content Delivery Networks (CDNs) grew in the past decade, it’s expected the same will happen with edge/far edge cloud in the next decade – bringing content and compute closer to end users.
Residential broadband networks have become critical infrastructure
With increased needs (upstream traffic was up more than 30%), accelerating rollout of new technologies – such as 5G and next-gen FTTH – will go a long way towards improving access and connectivity in rural, remote and underserved areas.
Better analytical insights enable service providers to keep innovating and delivering flawless service and loyalty-building customer experiences.
Deep insight into network traffic is essential
While the COVID-19 era may prove exceptional in many ways, the likelihood is that it has only accelerated trends in content consumption, production and delivery that were already underway.
Service providers must be able to have real-time, detailed network insights at their disposal – fully correlated with internet traffic insights – to get a holistic perspective on their network, services and consumption.
Security has never been more important
During the pandemic, DDoS traffic increased between 40-50%. As broadband connectivity is now largely an essential service, protecting network infrastructure and services becomes critical.
Agile and cost effective DDoS detection and automated mitigation are becoming paramount mechanisms to protect service provider infrastructures and services.
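As a minimal illustration of what automated detection of this kind involves, the sketch below flags traffic that spikes well above a smoothed baseline. The traffic figures, the EWMA smoothing, and the 3x trigger are assumptions for the example, not details from the Nokia report.

```python
# Illustrative threshold-based DDoS detection: flag samples that exceed
# a multiple of an exponentially weighted moving average (EWMA) baseline.
# All parameters here are invented for the sketch.

def detect_ddos(samples, alpha=0.2, factor=3.0, warmup=5):
    """Return indices of samples exceeding `factor` times the EWMA baseline."""
    baseline = None
    alerts = []
    for i, rate in enumerate(samples):
        if baseline is None:
            baseline = float(rate)
        elif i < warmup or rate <= factor * baseline:
            # Normal traffic: fold the sample into the baseline.
            baseline = alpha * rate + (1 - alpha) * baseline
        else:
            # Anomaly: raise an alert and keep the baseline unpolluted,
            # so mitigation can be triggered automatically.
            alerts.append(i)
    return alerts

# Steady ~100 Mb/s of traffic with a sudden flood in the last samples.
traffic = [100, 104, 98, 101, 99, 102, 100, 950, 990]
print(detect_ddos(traffic))  # the two flood samples are flagged
```

A production system would track many dimensions (source distribution, packet types, per-prefix rates) rather than a single rate, but the baseline-plus-trigger shape is the same.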
COVID-19 and the subsequent global recession have thrown a wrench into IT spending. Many enterprises have placed new purchases on hold. Gartner recently projected that global spending on IT would drop 8% overall this year — and yet dollars allocated to cloud-based services are still expected to rise by approximately 19%, bucking that downward trend.
Underscoring the relative health of the cloud market, IDC reported that all growth in traditional tech spending will be driven by four platforms over the next five years: cloud, mobile, social and big data/analytics. Their 2020-2023 forecast states that traditional software continues to represent a major contribution to productivity, while investments in mobile and cloud hardware have created new platforms which will enable the rapid deployment of new software tools and applications.
With entire workforces suddenly going remote all over the world, there certainly are a number of specific business problems that need to be addressed, and many of the big issues involve VPNs.
Assault on VPNs
Millions of employees are working from home, and they all have to securely access their corporate networks. The vast majority of enterprises still rely on on-premises servers to some degree (estimates range from 60% to 98%), so VPNs play a vital role in connecting employees to the network. This comes at a cost, though: bandwidth is gobbled up, slowing network performance – sometimes to a crippling level – and this has repercussions.
Maintenance of the thousands of machines and devices connected to the network gets sacrificed. The deployment of software, updates and patches simply doesn’t happen with the same regularity as when everyone works on-site. One reason for this is that content distribution (patches, applications and other updates) can take up much-needed bandwidth, and as a result, system hygiene gets sacrificed for the sake of keeping employees productive.
Putting off endpoint management, however, exposes corporate networks to enormous risks. Bad actors are well aware that endpoints are not being maintained at the same level as pre-pandemic, and they are more than willing to take advantage. Recent stats show that the volume of cyberattacks today is pretty staggering — much higher than prior to COVID-19.
Get thee to the cloud: Acceleration of modern device management
Because of bandwidth concerns, the pressure to trim costs, and the need to maintain machines in new ways, many enterprises are accelerating their move to the cloud. The cloud offers a lot of advantages for distributed workforces while also reducing costs. But digital transformation and the move to modern device management can’t happen overnight.
Enterprises have invested too much time, money, physical space and human resources to just walk away. Not to mention, on-premises environments have been highly reliable. Physical servers are one of the few things IT teams can count on to just work as intended these days.
Hybrid environments offer a happy medium. With the latest technology, enterprises can begin migrating to the cloud and adapt to changing conditions, meeting the needs of distributed teams. They can also save some money in the process. At the same time, they don’t have to completely abandon their tried-and-true servers.
Solving specific business problems: Content distribution to keep systems running
But what about those “specific business problems,” such as endpoint management and content distribution? Prior to COVID-19, this had been one of the biggest hurdles to digital transformation. It was not possible to distribute software and updates at scale without negatively impacting business processes and without excessive cost.
The issue escalated with the shift to remote work. Fortunately, technology providers have responded, developing solutions that leverage secure and efficient delivery mechanisms, such as peer-to-peer content distribution, that can work in the cloud. Even in legacy environments, vast improvements have been made to reduce bandwidth consumption.
These solutions allow enterprises to transition from a traditional on-premises infrastructure to the cloud and modern device management at their own speed, making their company more agile and resilient to the numerous risks they encounter today. Breakthrough technologies also support multiple system management platforms and help guarantee endpoints stay secure and updated even if corporate networks go down – something that, given the world we live in today, is a very real possibility.
Companies like Garmin and organizations such as the University of California San Francisco joined the unwitting victims of ransomware attacks in recent months. Their systems were seized, only to be released upon payment of millions of dollars.
While there is the obvious hard cost involved, there are severe operational costs as well — employees that can’t get on the network to do their jobs, systems must be scanned, updated and remediated to ensure the network isn’t further compromised, etc. A lot has to happen within a short period of time in the wake of a cyberattack to get people back to work as quickly and safely as possible.
Fortunately, with modern cloud-based content distribution solutions, all that is needed for systems to stay up is electricity and an internet connection. Massive redundancy is being built into the design of products to provide extreme resilience and help ensure business continuity in case part or all of the corporate network goes down.
The newest highly scalable, cloud-enabled content distribution options enable integration with products like Azure CDN and Azure Storage and also provide a single agent for migration to modern device management. With features like cloud integration, internet P2P, and predictive bandwidth harvesting, enterprises can leverage a massive amount of bandwidth from the internet to manage endpoints and ensure they always stay updated and secure.
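A common pattern behind such peer-to-peer distribution is to let one endpoint per site fetch content over the internet while its LAN neighbours copy from it. The sketch below is a hypothetical illustration of that planning step (the /24 grouping rule and the addresses are invented), not the behaviour of any specific product.

```python
# Hypothetical peer-to-peer content distribution plan: elect one peer
# per subnet to download from the CDN, and let the remaining endpoints
# fetch the content from that peer over the local network.

from collections import defaultdict

def plan_distribution(endpoints):
    """Group endpoints by /24 subnet and elect one CDN downloader each.

    Returns {subnet: (elected_peer, [lan_peers])}.
    """
    subnets = defaultdict(list)
    for ip in endpoints:
        subnet = ".".join(ip.split(".")[:3]) + ".0/24"
        subnets[subnet].append(ip)
    plan = {}
    for subnet, members in subnets.items():
        members.sort()  # deterministic election: lowest address wins
        plan[subnet] = (members[0], members[1:])
    return plan

plan = plan_distribution(["10.1.1.5", "10.1.1.9", "10.1.2.7", "10.1.1.2"])
# One WAN download per subnet; everyone else pulls over the LAN,
# which is how P2P distribution conserves internet bandwidth.
```

Real products use richer peer selection (link speed, availability, chunked transfers), but the bandwidth saving comes from exactly this shape: N endpoints, one internet download per site.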
Given these new developments precipitated and accelerated by COVID-19, as well as the clear, essential business problem these solutions address, expect to see movement and growth in the cloud sector. Expect to see an acceleration of modern device management, and despite IT spending cuts, expect to see a more secure, reliable, cost-efficient and operationally efficient enterprise in the days to come.
COVID-19 changed the rules of the game virtually overnight. The news has covered the broader impacts of the pandemic, particularly the hit to our healthcare, the drops in our economy, and the changes in education.
But when a massive portion of our workforce was sent home, and companies moved operations online, no one thought about how vulnerable to cyberattacks those companies had now become. The attack surface had changed, giving malicious actors new inroads that no one had previously watched out for.
The thing is, cybersecurity isn’t a battle that’s ultimately won, but an ongoing game to play every day against attackers who want to take your systems down. We won’t find a one-size-fits-all solution for the vulnerabilities that were exposed by the pandemic. Instead, each company needs to charge the field and fend off their opponent based on the rules of play. Today, those rules are that anything connected to the internet is fair game for cybercriminals, and it’s on organizations to protect these digital assets.
COVID may have changed the rules, but the game is still on. Despite the security threat, this pandemic may have caused a massive opportunity for companies — if they’re willing to take it.
WFH isn’t new, but WFH suddenly, at scale, is
The attack surface changed — and so did the rules of the game.
A work-from-home world isn’t a new thing. Slow transitions to remote workplaces have become more of a norm, though pushes for all-remote workplaces come in cycles. In the past five to ten years, despite the rise of flexible work options and global teams, work still happened mainly in an office.
What is new is a massive amount of the workforce shifting to remote work nearly overnight. Suddenly, the internet became a company’s network—thousands of employees turned into thousands of individual offices. Secured networks were traded in for home Wi-Fi, and gaps and holes in an organization’s attack surface were introduced where they didn’t exist before.
That shift suddenly exposed vulnerabilities in the system, like older systems that were never updated, internet assets that were forgotten, and patches that never happened. These weak links are all the invitation a malicious adversary needs.
Rogue threats—web infrastructure created by criminals—changed, too. Phishing schemes suddenly took a new approach in the form of “COVID lures”: emails and ads that lead to questionable websites providing cure-alls for the virus, taking advantage of people’s increased fear and anxiety.
Attackers realized they had another advantage: employees responsible for diagnosing and fixing these kinds of security issues are now preoccupied with supporting family, supervising their kids’ remote education, or working long hours to cover other cuts. In other words, some of our players were benched.
Combine this easier access to enterprise systems with the increased willingness to hand over information and a drop in vigilance, and you can see how this all became a new kind of game. The good news is that although malicious actors seeking ways into these exposed systems are adapting, a company can adapt as well.
Going on the offensive
Companies can’t afford large-scale cyberattacks at any time, but especially right now. The pandemic has caused consumers who may have lost significant income to be picky with their purchases and investments. Companies need to be focused on retaining customer relationships so that they’ll weather the pandemic, and a take-down of the network could undercut customer trust in unrecoverable ways.
But many companies won’t take action. They may view their older systems as good enough to ride the wave to the other side of the pandemic, and once there, they’ll go back to what they had used before, unprepared for the next attack. They may get through, but nothing will have changed — things will not go back to how they were, and you will no longer be able to rely on systems that protected a pre-COVID world.
Now, there’s an opportunity to huddle up, form a new strategy, and go on the offensive. The pandemic can be an opportunity for businesses to take a look at their vulnerabilities, map their attack surface, and take appropriate actions to secure and strengthen their systems. We’ve seen this after other catastrophic events, such as after 9/11, when companies adopted new resiliency plans for any future recovery events. Companies have the same opportunity now.
Here are some things a company can do to ensure their systems are secure, even if they’ve been running a remote workforce for a while.
Invest in security teams
Companies who understand the value of keeping their systems secure and taking initiatives against potential leaks will want to invest in cybersecurity. Shore up the team and make new hires if needed. Overall, companies have been supportive of their security teams during this time, but if security isn’t a priority, make it one.
Map the attack surface
The quick move to remote work likely meant a fast rollout of new initiatives and hastily deployed equipment, and mistakes made amid such rapid change are a leading cause of breaches. Audit your attack surface to uncover hidden failures and find where older systems, forgotten assets, or unpatched issues are creating vulnerabilities.
Ask questions about what changed: What programs were canceled or altered? How are resources shifting around? Can new assets be secured before they roll out? Also, do some threat modeling with your team. Ask what a threat actor would do to attack your systems, or where they would gain a foothold. In other words, anticipate the opposing team’s next move.
Even the best companies miss something, but the more you can anticipate, the better. Then prepare a response plan for investigating attacks quickly, develop a triage system, create a playbook, and run drills so your players know their roles.
Update the old and roll out the new
Now that you’re learning the new rules of the game, can visualize the playing field and anticipate the opposing team’s next move, it’s time to act. Update older systems or trade them in for new ones. Patch security holes. Shrink the attack surface. Roll out new digital initiatives you might have been sitting on.
Finally, create that mobile app. Move to the cloud. Find new digital ways to engage with your customers, since it may be a while before in-store foot traffic returns. As you do this, you may come to realize that your systems were set up in such a way that you need to start over. In that case, do it. Now’s the time.
Support your team
Above all, make sure you have the right team in place, and take care of them. Get them the resources and information they need as they audit, patch, and put new protocols in place for the future.
Communicate with both your team and your leadership to keep everyone informed, and if you think you’re too busy, communicate even more, as teammates would on the field. Hedge against burnout. Above all, give your team the time and space they need to find the holes and make the fixes.
Live to play another day
In many ways, this shift to digital has been in progress for a long time. However, because it was never a necessity, the transformation lagged or stalled from a lack of resources and was moved down the priorities list. But today we see stalled-out initiatives finally being implemented. The plans have been in place, and COVID is now forcing us to get it done.
Researchers at the University of Rochester and Cornell University have taken an important step toward developing a communications network that exchanges information across long distances by using photons, massless particles of light that are key elements of quantum computing and quantum communications systems.
Each pillar serves as a location marker for a quantum state that can interact with photons. Credit: University of Rochester illustration / Michael Osadciw
The research team has designed a nanoscale node made out of magnetic and semiconducting materials that could interact with other nodes, using laser light to emit and accept photons.
The development of such a quantum network – designed to take advantage of the physical properties of light and matter characterized by quantum mechanics – promises faster, more efficient ways to communicate, compute, and detect objects and materials compared to the networks currently used for computing and communications.
The node consists of an array of pillars a mere 120 nanometers high. The pillars are part of a platform containing atomically thin layers of semiconductor and magnetic materials.
The array is engineered so that each pillar serves as a location marker for a quantum state that can interact with photons and the associated photons can potentially interact with other locations across the device–and with similar arrays at other locations.
This potential to connect quantum nodes across a remote network capitalizes on the concept of entanglement, a phenomenon of quantum mechanics that, at its very basic level, describes how the properties of particles are connected at the subatomic level.
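Entanglement can be made concrete with the two-qubit Bell state (standard notation, not specific to this device):

```latex
\lvert \Phi^{+} \rangle \;=\; \frac{1}{\sqrt{2}}\left( \lvert 0 \rangle_{A} \lvert 0 \rangle_{B} \;+\; \lvert 1 \rangle_{A} \lvert 1 \rangle_{B} \right)
```

Measuring node A's qubit immediately fixes the outcome at node B, however far apart the nodes are. Preserving this correlation while photons carry it between locations is precisely what a quantum network must achieve.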
“This is the beginnings of having a kind of register, if you like, where different spatial locations can store information and interact with photons,” says Nick Vamivakas, professor of quantum optics and quantum physics at Rochester.
Toward ‘miniaturizing a quantum computer’
The project builds on work the Vamivakas Lab has conducted in recent years using tungsten diselenide (WSe2) in so-called Van der Waals heterostructures. That work uses layers of atomically thin materials on top of each other to create or capture single photons.
The new device uses a novel alignment of WSe2 draped over the pillars with an underlying, highly reactive layer of chromium triiodide (CrI3). Where the atomically thin layers (each about 12 microns across) touch, the CrI3 imparts an electric charge to the WSe2, creating a “hole” alongside each of the pillars.
In quantum physics, a hole is characterized by the absence of an electron. Each positively charged hole also has a binary north/south magnetic property associated with it, so that each is also a nanomagnet.
When the device is bathed in laser light, further reactions occur, turning the nanomagnets into individual optically active spin arrays that emit and interact with photons. Whereas classical information processing deals in bits that have values of either 0 or 1, spin states can encode both 0 and 1 at the same time, expanding the possibilities for information processing.
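The phrase “encode both 0 and 1 at the same time” refers to the textbook superposition of a qubit; a single spin state can be written as

```latex
\lvert \psi \rangle \;=\; \alpha \lvert 0 \rangle \;+\; \beta \lvert 1 \rangle, \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1,
```

where measurement yields 0 with probability |α|² and 1 with probability |β|². It is this continuum of states, rather than two discrete values, that expands the possibilities for information processing.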
“Being able to control hole spin orientation using ultrathin and 12-micron large CrI3, replaces the need for using external magnetic fields from gigantic magnetic coils akin to those used in MRI systems,” says lead author and graduate student Arunabh Mukherjee. “This will go a long way in miniaturizing a quantum computer based on single hole spins.”
Still to come: Entanglement at a distance?
Two major challenges confronted the researchers in creating the device.
One was creating an inert environment in which to work with the highly reactive CrI3. This was where the collaboration with Cornell University came into play.
“They have a lot of expertise with the chromium triiodide and since we were working with that for the first time, we coordinated with them on that aspect of it,” Vamivakas says. For example, fabrication of the CrI3 was done in nitrogen-filled glove boxes to avoid oxygen and moisture degradation.
The other challenge was determining just the right configuration of pillars to ensure that the holes and spin valleys associated with each pillar could be properly registered to eventually link to other nodes.
And therein lies the next major challenge: finding a way to send photons long distances through an optical fiber to other nodes, while preserving their properties of entanglement.
“We haven’t yet engineered the device to promote that kind of behavior,” Vamivakas says. “That’s down the road.”
Fake news detectors, which have been deployed by social media platforms like Twitter and Facebook to add warnings to misleading posts, have traditionally flagged online articles as false based on the story’s headline or content.
However, recent approaches also consider other signals, such as network features and user engagement, in addition to the story’s content to boost their accuracy.
Fake news detectors manipulated through user comments
However, new research from a team at Penn State’s College of Information Sciences and Technology shows how these fake news detectors can be manipulated through user comments to flag true news as false and false news as true. This attack approach could give adversaries the ability to influence the detector’s assessment of the story even if they are not the story’s original author.
“Our model does not require the adversaries to modify the target article’s title or content,” explained Thai Le, lead author of the paper and doctoral student in the College of IST. “Instead, adversaries can easily use random accounts on social media to post malicious comments to either demote a real story as fake news or promote a fake story as real news.”
That is, instead of fooling the detector by attacking the story’s content or source, commenters can attack the detector itself.
The researchers developed a framework – called Malcom – to generate, optimize, and add malicious comments that were readable and relevant to the article in an effort to fool the detector.
Then, they assessed the quality of the artificially generated comments by seeing if humans could differentiate them from those generated by real users. Finally, they tested Malcom’s performance on several popular fake news detectors.
Malcom outperformed the existing baseline models, fooling five leading neural network-based fake news detectors more than 93% of the time. To the researchers’ knowledge, this is the first model to attack fake news detectors using this method.
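To see why comment-aware detectors are attackable in principle, consider a deliberately simplified detector whose verdict blends a content score with an average comment score. This toy is NOT the Malcom framework or any real detector; the word lists, weights, and threshold are all invented for the illustration.

```python
# Toy comment-aware "fake news detector" (invented for illustration):
# the verdict is a weighted blend of a content score and the mean
# comment score, so adversarial comments can flip the label without
# touching the article itself.

CREDIBLE = {"study", "according", "official", "confirmed", "reuters"}
DUBIOUS = {"hoax", "fake", "clickbait", "debunked", "scam"}

def text_score(text):
    """Crude credibility signal: (credible hits - dubious hits) / words."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum((w in CREDIBLE) - (w in DUBIOUS) for w in words)
    return hits / len(words)

def detect(article, comments, content_weight=0.5):
    """Return 'real' or 'fake' from the article text plus user comments."""
    comment_score = (
        sum(text_score(c) for c in comments) / len(comments)
        if comments else 0.0
    )
    score = (content_weight * text_score(article)
             + (1 - content_weight) * comment_score)
    return "real" if score >= 0.0 else "fake"

article = "official study confirmed the new vaccine results"
print(detect(article, ["interesting study"]))           # 'real'
print(detect(article, ["hoax scam", "debunked hoax"]))  # flipped to 'fake'
```

Malcom generates fluent, relevant comments rather than keyword spam, and attacks learned models rather than word counts, but the leverage point is the same: any input channel the detector trusts becomes part of its attack surface.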
This approach could be appealing to attackers because they do not need to follow traditional steps of spreading fake news, which primarily involves owning the content.
The researchers hope their work will help those charged with creating fake news detectors to develop more robust models and strengthen methods to detect and filter-out malicious comments, ultimately helping readers get accurate information to make informed decisions.
“Fake news has been promoted with deliberate intention to widen political divides, to undermine citizens’ confidence in public figures, and even to create confusion and doubts among communities,” the team wrote in their paper.
Added Le, “Our research illustrates that attackers can exploit this dependency on users’ engagement to fool the detection models by posting malicious comments on online articles, and it highlights the importance of having robust fake news detection models that can defend against adversarial attacks.”
The global number of industrial IoT connections will increase from 17.7 billion in 2020 to 36.8 billion in 2025, representing an overall growth rate of 107%, Juniper Research found.
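The headline figure is easy to sanity-check from the two endpoints (the computed value truncates to the 107% the research reports):

```python
# Sanity check of the reported growth rate from the two endpoints:
# 17.7 billion connections in 2020 to 36.8 billion in 2025.
start, end = 17.7, 36.8
growth_pct = (end - start) / start * 100
print(f"{growth_pct:.1f}%")  # ~107.9%, reported as 107%
```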
The research identified smart manufacturing as a key growth sector of the industrial IoT market over the next five years, accounting for 22 billion connections by 2025.
The research predicted that 5G and LPWA (Low Power Wide Area) networks will play pivotal roles in creating attractive service offerings to the manufacturing industry, and enabling the realisation of the ‘smart factory’ concept, in which real-time data transmission and high connection densities allow highly-autonomous operations for manufacturers.
5G to maximise benefits of smart factories
The report identified private 5G services as crucial to maximising the value of a smart factory to service users, by leveraging the technology to enable superior levels of autonomy amongst operations.
It found that private 5G networks will prove most valuable when used for the transmission of large amounts of data in environments with a high density of connections, and where significant levels of data are generated. In turn, this will enable large-scale manufacturers to reduce operational spend through efficiency gains.
Software revenue to dominate industrial IoT market value
The research forecasts that over 80% of global industrial IoT market value will be attributable to software spend by 2025, reaching $216 billion. Software tools leveraging machine learning for enhanced data analysis and the identification of network vulnerabilities are now essential to connected manufacturing operations.
Research author Scarlett Woodford noted: “Manufacturers must exercise caution when implementing IoT technology, resisting the temptation to introduce connectivity to all aspects of operations. Instead, manufacturers must focus on the collection of data on the most valuable areas to drive efficiency gains.”
The COVID-19 pandemic has not slowed the adoption of zero trust technology globally, a Pulse Secure report reveals. In fact, 60% of organizations said they have accelerated zero trust implementation during the pandemic.
The report surveyed more than 250 technology professionals. The newly published report examines how enterprises are moving forward with zero trust networking initiatives, where they’re being successful in doing so and how COVID-19 has affected the forward movement of those projects.
Formalized zero trust projects putting orgs ahead of the DX curve
The research found that the main factor separating organizations that successfully moved their zero trust initiatives forward was whether they had started out with formalized zero trust projects.
Those that had dedicated budgets and formal initiatives (69%) were far more likely to continue accelerating those projects throughout the pandemic, while those that had ad hoc zero trust projects were more likely to stall progress or stop entirely.
“The global pandemic has had some profound effects on the enterprise – with remote working being rolled out on an unprecedented scale, increased leverage of cloud resources and applications, and the transition to greater workplace flexibility,” said Scott Gordon, CMO at Pulse Secure.
“The findings indicate that organizations that advance their initiatives and planning towards zero trust process and technology implementation will be ahead of the digital transformation curve and much more resilient to threats and crises.”
The research went further into enterprises’ efforts to bring about zero trust networking in their environments. 85% of respondents have defined zero trust initiatives, but only 42% have received added budget for their projects. The projects that did receive added budget were more likely to persist through the pandemic.
Enterprises were overwhelmingly positive about their success in pursuing zero trust networking, with 94% indicating some degree of success: 50% labeled their efforts successful and 44% somewhat successful.
Bringing together security and networking teams
Dedicated zero trust projects tend to be interdisciplinary, bringing together security and networking teams. In 45% of such projects, security and networking teams have a zero trust partnership in which they formally share tools and processes. In 50% of cases, enterprises created a taskforce from both teams to pursue zero trust.
The three primary ways in which they collaborated were by coordinating access security controls across different systems (48%), assessing access security control requirements (41%) and defining access requirements according to user, role, data, and application (40%).
However, the survey found that collaboration is not without its own roadblocks. 85% of respondents in zero trust taskforces and partnerships found themselves struggling with cross-team skills gaps (33%), a lack of tools and processes that might facilitate collaboration (31%), and budget conflicts (31%).
“The survey shows that organizations that move forward with formal initiatives and budget are more likely to achieve implementation success and operational gain. We appreciate Pulse Secure’s support and sponsorship of this report that organizations can use to benchmark and progress their zero trust programs.”
Additional key findings
- Prime zero trust benefits: When asked what they consider to be the prime benefit of zero trust networks, IT operations agility (40%), improved governance risk and compliance (35%), breach prevention (34%), reducing the attack surface (31%), and unauthorized access mitigation (28%) ranked among the strongest responses.
- Hybrid IT remote access: Respondents are applying hybrid IT requirements to secure remote access within their zero trust network strategy: 62% want cloud application access, and half of enterprises want access to public and private cloud resources and applications.
- IoT device exposures: Respondents discussed their position towards IoT devices which cannot be provided with the user identities on which zero trust is based and how they intend to create access policies for them. 36% said that devices would receive tailored access privileges based on function and characteristics; others said that all devices would receive a generic minimum level of access privileges (28%) and that untrusted devices would have limited network access with no access to high risk or compliance zones (23%).
The importance of applications and digital services has skyrocketed in 2020. Connectivity and resilience are imperative to keeping people connected and business moving forward. Visibility into network traffic, especially in distributed edge environments and with malicious attacks on the rise, is a critical part of ensuring uptime and performance.
“NS1 created pktvisor to address our need for more visibility across our global anycast network,” said Shannon Weyrick, VP of architecture at NS1. “By efficiently summarizing and collecting key metrics at all of our edge locations we gain a deep understanding of traffic patterns in real time, enabling rich visualization and fast automation which further increase our resiliency and performance. We are big users of and believers in open source software. As this tool will benefit other organizations leveraging distributed edge architectures, we’ve made it open and we invite the developer community to help drive future updates and innovation.”
More about pktvisor
Pktvisor summarizes network traffic in real time directly on edge nodes using Apache DataSketches. The summary information may be visualized locally via the included CLI UI, and simultaneously collected centrally via HTTP into your time series database of choice, to drive global visualizations and automation.
- Packet counts and rates (w/percentiles), breakdown by ingress/egress, protocol
- DNS counts and rates, breakdown by protocol, response code
- Cardinality: Source and destination IP, DNS Qname
- DNS transaction timings (w/percentiles)
- Top 10 heavy hitters for IPs and ports; DNS Qnames, Qtypes, Result Codes; slow DNS transactions, NX, SRVFAIL, REFUSED Qnames; and GeoIP and ASN
The metrics pktvisor provides can help network and security teams by supplementing existing metrics to help understand traffic patterns, identify attacks, and gather information on how to mitigate them.
Available as a Docker container, it is easy to install and has low network and storage requirements. Due to its summarizing design, the amount of data collected is a function of the number of hosts being monitored, not of traffic rates, so spikes or even DDoS attacks will not affect downstream collection systems.
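To make that summarizing design concrete, here is a minimal sketch in Python of the kind of bounded-memory heavy-hitter counting that frequent-items sketches provide. Pktvisor itself builds on Apache DataSketches; this Space-Saving implementation is only an illustrative stand-in, and the IP addresses are made up:

```python
class SpaceSaving:
    """Approximate top-k ("heavy hitter") counter with bounded memory.

    Illustrative stand-in for the frequent-items sketches pktvisor
    builds on; memory use depends on k, not on traffic volume.
    """

    def __init__(self, k):
        self.k = k
        self.counts = {}  # item -> (count, overestimation error)

    def add(self, item):
        if item in self.counts:
            c, e = self.counts[item]
            self.counts[item] = (c + 1, e)
        elif len(self.counts) < self.k:
            self.counts[item] = (1, 0)
        else:
            # Evict the current minimum; the newcomer inherits its count
            # as an upper bound on how much we may be overestimating.
            victim = min(self.counts, key=lambda i: self.counts[i][0])
            c, _ = self.counts.pop(victim)
            self.counts[item] = (c + 1, c)

    def top(self, n):
        return sorted(self.counts.items(),
                      key=lambda kv: kv[1][0], reverse=True)[:n]


# Simulated packet stream: one dominant source IP among background noise
stream = ["10.0.0.1"] * 500 + ["192.168.1.%d" % (i % 50) for i in range(500)]
sketch = SpaceSaving(k=8)
for ip in stream:
    sketch.add(ip)
print(sketch.top(1)[0][0])  # -> 10.0.0.1
```

Because the table never grows past k entries, memory stays constant no matter how many packets arrive, which is why traffic spikes don’t inflate what gets shipped downstream.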
Federal IT leaders across the country voiced the importance of network visibility in managing and securing their agencies’ increasingly complex and hybrid networks, according to Riverbed.
Of 200 participating federal government IT decision makers and influencers, 90 percent consider their networks to be moderately-to-highly complex, and 32 percent say that increasing network complexity is the greatest challenge an IT professional without visibility faces when managing their agency’s network.
Driving this network complexity are Cloud First and Cloud Smart initiatives that make it an imperative for federal IT to modernize its infrastructure with cloud transformation and “as-a-service” adoption.
More than 25 percent of respondents are still in the planning stages of their priority modernization projects, though 87 percent of survey respondents recognize that network visibility is a strong or moderate enabler of cloud infrastructure.
Network visibility can help expedite the evaluation process to determine what goes onto an agency’s cloud and what data and apps stay on-prem; it also allows clearer, ongoing management across the networks to enable smooth transitions to cloud, multi-cloud and hybrid infrastructures.
Accelerated move to cloud
The COVID-19 pandemic has further accelerated modernization and cloud adoption to support the massive shift of the federal workforce to telework – a recent Market Connections study indicates that 90 percent of federal employees are currently teleworking and that 86 percent expect to continue to do so at least part-time after the pandemic ends.
The rapid adoption of cloud-based services and solutions and an explosion of new endpoints accessing agency networks during the pandemic generated an even greater need for visibility into the who, what, when and where of traffic. In fact, 81 percent of survey respondents noted that the increasing use of telework accelerated their agency’s use and deployment of network visibility solutions, with 25 percent responding “greatly.”
“The accelerated move to cloud was necessary because the majority of federal staff were no longer on-prem, creating significant potential for disruption to citizen services and mission delivery,” said Marlin McFate, public sector CTO at Riverbed.
“This basically took IT teams from being able to see, to being blind. All of their users were now outside of their protected environments, and they no longer had control over the internet connections, the networks employees were logging on from or who or what else had access to those networks. To be able to securely maintain networks and manage end-user experience, you have to have greater visibility.”
Visibility drives security
Lack of visibility into agency networks and the proliferation of apps and endpoints designed to improve productivity and collaboration expands the potential attack surface for cyberthreats.
Ninety-three percent of respondents believe that greater network visibility facilitates greater network security and 96 percent believe network visibility is moderately or highly valuable in assuring secure infrastructure.
Further, respondents ranked cybersecurity as their agency’s number one priority that can be improved through better network visibility, and automated threat detection was identified as the most important feature of a network visibility solution (24 percent), followed by advanced reporting features (14 percent), and automated alerting (13 percent).
“Network visibility is the foundation of cybersecurity and federal agencies have to know what’s on their network so they can rapidly detect and remediate malicious actors. And while automation enablement calls for an upfront time investment, it can significantly improve response time not only for cyber threat detection but also network issues that can hit employee productivity,” concluded McFate.
Corporate WANs are failing to deliver on businesses’ priorities, with 55% of respondents citing security as the biggest pain point, 43% service flexibility, 36% supplier performance, and 35% network congestion, according to a survey from Telia Carrier.
The research was conducted in four of the world’s biggest markets – the US, the UK, Germany and France – and provides insights into the evolution of the corporate WAN and cloud adoption from the top of business.
Digital technology and the cloud have transformed the way businesses are run and connect with their employees, suppliers, partners, and customers — across sites and geographies. Public internet and cloud-based services underpin the corporate WAN landscape and reliable connectivity is seen as critical to business performance.
90% of the survey’s respondents confirm that their enterprises rely on the public internet for some or all of their wide area network services, and 48% say the impact of a corporate WAN outage exceeding 24 hours would be catastrophic.
Today’s enterprise: Connected but uninformed?
However, as the research findings reveal, the corporate WAN experience is not yet the best it could, and should, be. This is not just because WAN technology is still evolving and suppliers need to improve their customer experience, but also because the WAN ecosystem hasn’t been fully understood: knowledge gaps about the internet and its various tiers have made decision-making difficult.
For example, only half of survey respondents (US: 57%; FR: 56%; UK: 49%; DE: 37%) rate their understanding of how the internet backbone works as very good or excellent, but almost two-thirds think of public internet connectivity as a commodity that doesn’t vary much between suppliers (FR: 74%; DE: 62%; US: 62%; UK: 49%).
Commenting on the findings of the research, Mattias Fridström, Chief Evangelist, Telia Carrier said: “Network-development strategies, unfortunately, appear to be missing the backbone piece of the puzzle. This means that Tier 1 suppliers, such as telcos and carriers, are often overlooked when enterprises choose how to build their WANs and connect to the cloud.”
Tomorrow’s supplier: Flexible, innovative and customer-focused
The research illustrates that the network providers of the future have to put the needs of the customer at the center of everything they do. Bandwidth (40%), service flexibility (36%) and customer support (29%) are enterprises’ top three priorities when deciding on a local network partner or ISP to connect to their preferred cloud-service providers.
Sustainability is also a key criterion when shortlisting suppliers or choosing between candidates, and enterprises are prepared to pay a premium for it. In fact, 38% of all respondents confirmed that they now only shortlist suppliers with a strong commitment to sustainability – in France, this number rises to 55%. Of those who don’t include sustainability in their initial selection criteria, 42% say it helps them choose between the final candidates (US: 46%; UK & DE: 45%; FR: 28%).
Only a fifth say they choose suppliers solely on the basis of price and performance. Importantly, 95% are willing to pay a premium of 5% or more for a sustainable supplier. 49% of respondents in Germany, 48% in the UK, 42% in the US and 37% in France confirmed their commitment to paying between 10% and 15% more.
The survey also found that demand for new tools and technologies to improve workflows and increase transparency is strong. For example, 90% would like their network partners to adopt more machine-to-machine workflows and automation to enhance their services, and 68% say they already use APIs to achieve real-time visibility of their network performance or control of their network infrastructure.
“If organizations really want to create the networks that transform their businesses, whilst controlling costs and reducing their carbon footprint,” Fridström concluded, “their leaders may need to review their strategies for the next three to five years. Network providers can be strategic partners in the growth and development of enterprises—if they’re aligned with enterprises’ needs.”
Positive Technologies performed instrumental scanning of the network perimeter of selected corporate information systems. A total of 3,514 hosts were scanned, including network devices, servers, and workstations.
The results show the presence of high-risk vulnerabilities at most companies. However, half of these vulnerabilities can be eliminated by installing the latest software updates.
The research shows high-risk vulnerabilities at 84% of companies across finance, manufacturing, IT, retail, government, telecoms and advertising. 58% of companies have at least one host with a high-risk vulnerability for which a publicly available exploit exists.
Publicly available exploits exist for 10% of the vulnerabilities found, which means attackers can exploit them even if they don’t have professional programming skills or experience in reverse engineering.
The detected vulnerabilities are caused by the absence of recent software updates, outdated algorithms and protocols, configuration flaws, mistakes in web application code, and accounts with weak and default passwords.
Vulnerabilities can be fixed by installing the latest software versions
As part of the automated security assessment of the network perimeter, 47% of detected vulnerabilities can be fixed by installing the latest software versions.
All companies had problems with keeping software up to date. At 42% of them, PT found software for which the developer had announced the end of life and stopped releasing security updates. The oldest vulnerability found in automated analysis was 16 years old.
Analysis revealed accessible remote access and administration interfaces, such as Secure Shell (SSH), Remote Desktop Protocol (RDP), and Telnet. These interfaces allow any external attacker to conduct brute-force attacks.
Attackers can brute-force weak passwords in a matter of minutes and then obtain access to network equipment with the privileges of the corresponding user before proceeding to develop the attack further.
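As a rough illustration of the exposure described above, the sketch below flags well-known remote-admin ports in a host’s open-port set and does the back-of-envelope arithmetic for exhausting a weak password keyspace. The port numbers are standard assignments; the guess rate and password shape are hypothetical figures chosen purely for illustration:

```python
# Well-known ports for the remote administration interfaces named above
ADMIN_PORTS = {22: "SSH", 23: "Telnet", 3389: "RDP"}


def exposed_admin_services(open_ports):
    """Return the remote-admin services found among a host's open ports."""
    return {p: ADMIN_PORTS[p] for p in open_ports if p in ADMIN_PORTS}


def brute_force_minutes(charset_size, length, guesses_per_sec):
    """Worst-case time to exhaust a password keyspace, in minutes."""
    return charset_size ** length / guesses_per_sec / 60


# Suppose a perimeter scan of one host found these ports open:
print(sorted(exposed_admin_services({22, 80, 443, 3389}).values()))
# -> ['RDP', 'SSH']

# A 4-digit numeric password at a (hypothetical) 10 guesses/sec online rate:
print(round(brute_force_minutes(10, 4, 10), 1))  # -> 16.7 minutes
```

The second figure shows why “a matter of minutes” is no exaggeration for trivially weak credentials; real attack rates vary widely with lockout policies and tooling.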
Ekaterina Kilyusheva, Head of Information Security Analytics Research Group of Positive Technologies said: “Network perimeters of most tested corporate information systems remain extremely vulnerable to external attacks.
“Our automated security assessment proved that all companies have network services available for connection on their network perimeter, allowing hackers to exploit software vulnerabilities and bruteforce credentials to these services.
Minimizing the number of services on the network perimeter is recommended
Kilyusheva continued: “At most of the companies, experts found accessible web services, remote administration interfaces, and email and file services on the network perimeter. Most companies also had external-facing resources with arbitrary code execution or privilege escalation vulnerabilities.
“With maximum privileges, attackers can edit and delete any information on the host, which creates a risk of DoS attacks. On web servers, these vulnerabilities may also lead to defacement, unauthorized database access, and attacks on clients. In addition, attackers can pivot to target other hosts on the network.
“We recommend minimizing the number of services on the network perimeter and making sure that accessible interfaces truly need to be available from the Internet. If this is the case, it is recommended to ensure that they are configured securely, and businesses install updates to patch any known vulnerabilities.
“Vulnerability management is a complex task that requires proper instrumental solutions,” Kilyusheva added. “With modern security analysis tools, companies can automate resource inventories and vulnerability searches, and also assess security policy compliance across the entire infrastructure. Automated scanning is only the first step toward achieving an acceptable level of security. To get a complete picture, it is vital to combine automated scanning with penetration testing. Subsequent steps should include verification, triage, and remediation of risks and their causes.”
Healthcare delivery organizations (HDOs) have been busy increasing their network and systems security in the last year, though there is still much room for improvement, according to Forescout researchers.
This is the good news: the percentage of devices running unsupported Windows operating systems fell from 71% in 2019 to 32% in 2020, and there have been improvements when it comes to timely patching and network segmentation.
The bad news? Some network segmentation issues still crop up and HDOs still use insecure protocols for both medical and non-medical network communications, as well as for external communications.
Based on two data sources – an analysis of network traffic from five large hospitals and clinics and the Forescout Device Cloud (containing data for some 3.3 million devices in hundreds of healthcare networks) – the researchers found that, between April 2019 and April 2020:
- The percentage of devices running versions of Windows OS that will be supported for more than a year jumped from 29% to 68% and the percentage of devices running Windows OS versions supported via ESU fell from 71% to 32%. Unfortunately, the percentage of devices running Windows OSes like Windows XP and Windows Server 2003 remained constant (though small)
- There was a decided increase in network segmentation
Unfortunately, most network segments (VLANs) still mix healthcare devices with IT, personal, or OT devices, or mix sensitive and vulnerable devices.
As far as communication protocols are concerned, they found that:
- 4 out of the 5 HDOs were communicating between public and private IP addresses using a medical protocol, HL7, that transports medical information in clear text
- 2 out of the 5 HDOs allowed medical devices to communicate over IT protocols with external servers reachable from outside the HDO’s perimeter
- All HDOs used obsolete versions of communication protocols, internally and externally (e.g., SSLv3, TLSv1.0, and TLSv1.1, SNMP v1 and 2, NTP v1 and 2, Telnet)
- Many of the medical and proprietary protocols used by medical equipment lack encryption and authentication, or don’t enforce their use (e.g., HL7, DICOM, POCT01, LIS02). OT and IoT devices in use have a similar problem
That’s all a big deal, because attacks exploiting these security vulnerabilities could do a lot of damage, including stealing patients’ information, altering it, disrupting the normal behavior of medical devices, disrupting the normal functioning of the entire organization (e.g., via a ransomware attack), etc.
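A minimal sketch of how a defender might turn findings like these into an automated audit. The protocol names and their risk buckets simply mirror the lists in this report (they are not a definitive protocol-security assessment), and `audit_protocols` is a hypothetical helper:

```python
# Buckets mirror the report: obsolete protocol versions, and medical
# protocols that lack (or don't enforce) encryption and authentication.
OBSOLETE = {"SSLv3", "TLSv1.0", "TLSv1.1", "SNMPv1", "SNMPv2",
            "NTPv1", "NTPv2", "Telnet"}
UNENCRYPTED_MEDICAL = {"HL7", "DICOM", "POCT01", "LIS02"}


def audit_protocols(observed):
    """Partition observed protocol names into the report's risk buckets."""
    return {
        "obsolete": sorted(OBSOLETE & observed),
        "unencrypted_medical": sorted(UNENCRYPTED_MEDICAL & observed),
        "other": sorted(observed - OBSOLETE - UNENCRYPTED_MEDICAL),
    }


report = audit_protocols({"TLSv1.0", "TLSv1.2", "HL7", "Telnet"})
print(report["obsolete"])             # -> ['TLSv1.0', 'Telnet']
print(report["unencrypted_medical"])  # -> ['HL7']
```

In practice the `observed` set would come from passive traffic analysis of the kind the researchers performed, rather than being typed in by hand.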
Defense strategies for better healthcare network security
The researchers advised HDOs’ cyber defenders to:
- Find a way to “see” all the devices on the network, whether they comply with company policies, and detect malicious network behavior they may exhibit
- Identify and remediate weak and default passwords
- Map the network flow of existing communications to help identify unintended external communications, prevent medical data from being exposed publicly, and to detect the use of insecure protocols
- Improve segmentation of devices (e.g., isolate fragile legacy applications and operating systems, segment groups of devices according to their purpose, etc.)
“Whenever possible, switch to using encrypted versions of protocols and eliminate the usage of insecure, clear-text protocols such as Telnet. When this is not possible, use segmentation for zoning and risk mitigation,” they noted.
They also warned about the danger of over-segmentation.
“Segmentation requires well-defined trust zones based on device identity, risk profiles and compliance requirements for it to be effective in reducing the attack surface and minimizing blast radius. Over-segmentation with poorly defined zones simply increases complexity without tangible security benefits,” they concluded.
Operator‑billed revenue from 5G connections will reach $357 billion by 2025, rising from $5 billion in 2020, its first full year of commercial service, according to Juniper Research.
By 2025, 5G revenue is anticipated to represent 44% of global operator‑billed revenue owing to rapid migration of 4G mobile subscribers to 5G networks and new business use cases enabled by 5G technology.
However, the study identified 5G network roll-outs as highly resilient to the COVID-19 pandemic. It found that supply chain disruptions caused by the initial pandemic period have been mitigated through modified physical roll-out procedures, in order to maintain the momentum of hardware deployments.
5G connections to generate 250% more revenue than average cellular connection
The study found that 5G uptake had surpassed initial expectations, predicting total 5G connections will exceed 1.5 billion by 2025. It also forecast that the average 5G connection will generate 250% more revenue than the average cellular connection by 2025.
Operators will apply premium pricing to 5G connections to secure a return on investment in new 5G-enabled services, such as uRLLC (Ultra-Reliable Low-Latency Communication) and network slicing.
However, these services alongside the high-bandwidth capabilities of 5G will create data-intensive use cases that lead to a 270% growth in data traffic generated by all cellular connections over the next five years.
Networks must increase virtualisation to handle 5G data traffic
Operators must use future launches of standalone 5G networks as an opportunity to further increase virtualisation in core networks. Failure to develop 5G network architectures that can handle increasing traffic will lead to reduced network functionality, inevitably diminishing the value proposition of an operator’s 5G network amongst end users.
Research author Sam Barker remarked: “Operators will compete on 5G capabilities, in terms of bandwidth and latency. A lesser 5G offering will lead to user churn to competing networks and missed opportunities in operators’ fastest-growing revenue stream.”
Remote work has left many organizations lagging in productivity and revenue due to legacy remote access solutions. 19% of IT leaders surveyed said they often or always experience network performance and latency issues when using legacy remote access solutions, with an additional 43% saying they sometimes do.
Those issues have resulted in a loss of productivity for 68% of respondents and a loss of revenue for 43%, a Perimeter 81 report reveals.
According to the report, organizations securely connect to internal networks in a variety of ways when working remotely. Some 66% reported using VPNs, 58% said they use a cloud service through a web browser, 48% rely on a remote access solution, and 34% use a firewall.
The many organizations still using legacy solutions like VPNs and firewalls will struggle to scale, face bottlenecks, and lack network visibility.
Security solutions and remote work
33% of respondents said a password is the only way they authenticate themselves to gain access to systems. And while 62% of IT managers said they are using cloud-based security solutions to secure remote access, 49% said they’re still using a firewall, and 41% a hardware VPN.
But there are signs of progress, as organizations increasingly favor modern cloud-based solutions over outdated legacy solutions. Following the pandemic and a switch to remote work, 72% of respondents said they’re very or completely likely to increase adoption of cloud-based security solutions, 38% higher than before the pandemic.
“It’s no surprise that companies are increasingly moving to cloud-based cyber and network security platforms. As corporations of all sizes rely on the cloud to run their businesses, they need new ways of consuming security to effectively prevent cyberattacks regardless of their location or network environment.”
Other key findings
- 74% of respondents are adopting cloud-based security solutions over hardware due to security concerns. 44% are doing so due to scalability concerns, and 43% cited time-saving considerations.
- 61% of organizations believe that having to protect new devices is the greatest security concern in light of remote work, while 56% said their greatest concern was lack of visibility into remote user activity.
- 39% of respondents reported that scalability is their greatest challenge in securing the remote workforce, while 38% said budget allocation was their greatest challenge.
While there has been a year-over-year decrease in publicly disclosed data breaches, an Arctic Wolf report reveals that the number of corporate credentials with plaintext passwords on the dark web has increased by 429 percent since March.
For a typical organization, this means there are now, on average, 17 sets of corporate credentials available on the dark web that could be used by hackers.
With access to just one corporate account, attackers can easily execute account takeover attacks, which allow them to move laterally within an organization’s corporate network and gain access to sensitive data, intellectual property, competitive information, or funds.
Cybersecurity incidents now occur after hours
The sharp increase in corporate credential leaks underscores the need for organizations to have dedicated 24×7 monitoring of their network, endpoint, and cloud environments in order to prevent targeted attacks that could happen at any time.
Of the high-risk security incidents observed, 35% occur between the hours of 8:00 PM and 8:00 AM, and 14% occur on weekends; times when many in-house security teams are not online.
“The cybersecurity industry has an effectiveness problem. Every year new technologies, vendors, and solutions emerge. Yet, despite this constant innovation, we continue to see breaches in the headlines.
“The only way to eliminate cybersecurity challenges like ransomware, account takeover attacks, and cloud misconfigurations is by embracing security operations capabilities that fully integrate people, processes, and technology,” said Mark Manglicmot, VP Security Services, Arctic Wolf.
COVID-19 increasing the number of security operations challenges
- A 64 percent increase in phishing and ransomware attempts – Hackers have created new phishing lures around COVID-19 topics and adapted traditional lures seeking to take advantage of remote workers.
- Critical vulnerability patch time has increased by 40 days – A combination of higher common vulnerabilities and exposures (CVE) volumes, more critical CVEs, and the emergence of a remote workforce have significantly slowed the patching programs at many organizations.
- Unsecured Wi-Fi usage is up by over 240 percent – Remote workforces connecting to open and unsecured Wi-Fi networks outside of their office or home are now facing increased risks of malware exposure, credential theft, and browser session hijacking.
Cybersecurity: Main focus for planned projects
IT leaders also revealed that adapting culture quickly to new ways of working is the number one challenge they need to overcome in the next 12 months. The findings are unveiled following a survey of 600+ attendees for the upcoming DTX: NOW event.
26 percent of respondents cited cybersecurity as the main focus for planned projects, followed by cloud (21 percent), data analytics (15 percent) and network infrastructure (14 percent). According to separate research, there were more hands-on-keyboard intrusions in the first half of 2020 than in the entirety of 2019.
IT leaders revealed that adapting digital culture for a new world of work was the main challenge they need to overcome in the next year (18 percent), followed by automation of business tasks and processes (14 percent), and choosing the right cloud strategy (12 percent).
Most significant barriers to digital transformation projects
The biggest barriers to delivering digital transformation projects on time and on budget reflect changing organizational dynamics that are being intensified by COVID-19. The most significant barrier to projects was revealed to be changing scope (29 percent of respondents), reduced budgets (24 percent) and changing team structure (17 percent).
The data also indicates that digital transformation has become a priority for businesses of every size. 58 percent of projects are anticipated to come in at less than £250,000, while just 22 percent have a budget of over £500,000 and only 10 percent over £1 million.
“COVID-19 is a catalyst for digital transformation, but it’s a leveller too. We’re hearing from IT leaders that there is a shift in which technologies businesses are investing in.
“Ensuring the vast majority of employees could work from home practically overnight has exposed issues with IT strategy, and modernising the core tech stack has become an immediate priority for just about every organization”, said James McGough, managing director of Imago Techmedia.
“Many businesses have found that areas like cybersecurity measures, network infrastructure and cloud strategy need urgent adaptation for a distributed workforce.
“Some companies might be in a position to consider the likes of AI, blockchain and quantum computing, but the reality for most is that the future-looking, big ticket tech projects are on the back burner for now. Companies of every size are finding themselves restarting their digital transformation journeys,” McGough concluded.
80% of organizations experienced a cybersecurity breach that originated from vulnerabilities in their vendor ecosystem in the past 12 months, and the average organization had been breached in this way 2.7 times, according to a BlueVoyant survey.
The research also found organizations are experiencing multiple pain points across their cyber risk management program as they aim to mitigate risk across a network that typically encompasses 1409 vendors.
The study was conducted by Opinion Matters and recorded the views and experiences of 1,505 CIOs, CISOs and Chief Procurement Officers in organizations with more than 1,000 employees across a range of vertical sectors including business and professional services, financial services, healthcare and pharmaceutical, manufacturing, utilities and energy. It covered five countries: USA, UK, Mexico, Switzerland and Singapore.
Third-party cyber risk budgets and other key findings
- 29% say they have no way of knowing if cyber risk emerges in a third-party vendor
- 22.5% monitor their entire supply chain
- 32% only re-assess and report their vendor’s cyber risk position either six-monthly or less frequently
- The average headcount in internal and external cyber risk management teams is 12
- 81% say that budgets for third-party cyber risk management are increasing, by an average figure of 40%
Commenting on the research findings, Jim Penrose, COO BlueVoyant, said: “That four in five organizations have experienced recent cybersecurity breaches originating in their vendor ecosystem is of huge concern.
“The research clearly indicated the reasons behind this high breach frequency: only 23% are monitoring all suppliers, meaning 77% have limited visibility and almost one-third only re-assess their vendors’ cyber risk position six-monthly or annually. That means in the intervening period they are effectively flying blind to risks that could emerge at any moment in the prevailing cyber threat environment.”
Multiple pain points exist in third-party cyber risk programs as budgets rise
Further insight into the difficulties that are leading to breaches was revealed when respondents were asked to identify the top three pain points related to their third-party cyber risk programs, in the past 12 months.
The most common problems were:
- Managing the volume of alerts generated by the program
- Working with suppliers to improve security performance, and
- Prioritizing which risks are urgent and which are not.
However, overall responses were almost equally spread across thirteen different areas of concern. In response to these issues, budgets for third-party cyber risk programs are set to rise in the coming year. 81% of survey respondents said they expect to see budgets increase, by 40% on average.
Jim Penrose continues: “The fact that cyber risk management professionals are reporting difficulties across the board shows the complexity they face in trying to improve performance.
“It is encouraging that budget is being committed to tackling the problem, but with so many issues to solve many organizations will find it hard to know where to start. Certainly, the current approach is not working, so simply trying to do more of the same will not shift the dial on third-party cyber risk.”
Variation across industry sectors
Analysis of the responses from different commercial sectors revealed considerable variations in their experiences of third-party cyber risk. The business services sector is suffering the highest rate of breaches, with 89% saying they have been breached via a weakness in a third-party in the past 12 months.
The average number of incidents experienced in the past 12 months was also highest in this sector, at 3.6. This is undoubtedly partly down to the fact that firms in the sector reported working with 2,572 vendors, on average.
In contrast, only 57% of respondents from the manufacturing sector said they had suffered third-party cyber breaches in the past 12 months. The sector works with 1,325 vendors on average, but had a much lower breach frequency, at 1.7.
Thirteen percent of respondents from the manufacturing sector also reported having no pain points in their third-party cyber risk management programs, a percentage more than twice as high as in any other sector.
Commenting on the stark differences observed between sectors, Jim Penrose said: “This underlines that there is no one-size-fits-all solution to managing third-party cyber risk.
“Different industries have different needs and are at varying stages of maturity in their cyber risk management programs. This must be factored into attempts to improve performance so that investment is directed where it has the greatest impact.”
Mix of tools and tactics in play
The survey investigated the tools organizations have in place to implement third-party cyber risk management and found a mix of approaches with no single approach dominating.
Many organizations are evolving towards a data-driven strategy, with supplier risk data and analytics in use by 40%. However, static, point-in-time tactics such as on-site audits and supplier questionnaires remain common.
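The contrast between point-in-time questionnaires and continuous, data-driven risk scoring can be sketched in code. The class, field names, and severity weights below are illustrative assumptions, not any vendor's actual scoring model: the point is that external signals update the score between audits instead of waiting for the next questionnaire.

```python
from dataclasses import dataclass, field

# Hypothetical severity weights for external risk signals; real
# scoring models would be far richer (and likely time-decayed).
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 15}

@dataclass
class VendorRisk:
    name: str
    baseline: float                 # score from the last questionnaire/audit
    alerts: list = field(default_factory=list)

    def record_alert(self, severity: str) -> None:
        """Fold a new external signal (e.g. leaked credentials,
        exposed service, malware beacon) into the running score."""
        self.alerts.append(severity)

    def current_score(self) -> float:
        """Baseline plus the accumulated weight of observed alerts."""
        return self.baseline + sum(SEVERITY_WEIGHT[s] for s in self.alerts)

vendor = VendorRisk("acme-hosting", baseline=20.0)
vendor.record_alert("high")
vendor.record_alert("critical")
print(vendor.current_score())  # 20 + 7 + 15 = 42.0
```

A questionnaire-only program would still report the baseline of 20.0 until the next re-assessment, which is the visibility gap the survey respondents describe.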
Jim Penrose concludes: “Overall the research findings indicate a situation where the large scale of vendor ecosystems and the fast-changing threat environment is defeating attempts to effectively manage third-party cyber risk in a meaningful way.
“Visibility into such a large and heterogenous group of vendors is obscured due to lack of resources and a continuing reliance on manual, point-in-time processes, meaning real-time emerging cyber risk is invisible for much of the time.
“For organizations to make meaningful progress in managing third-party cyber risk and reduce the current concerning rate of breaches, they need to be pursuing greater visibility across their vendor ecosystem and achieving better context around alerts so they can be prioritized, triaged and quickly remediated with suppliers.”
Business support systems (BSS) are necessary to meet the fast-changing requirements of 5G and to enhance customer experiences, Frost & Sullivan research reveals.
They also help communication service providers (CSPs) deliver personalized service experiences for consumers and businesses.
BSS market could experience a slowdown
Vendors have introduced advanced BSS features, including the ability to support flexible deployments (core and edge) and options for network slice lifecycle management, which are critical in helping CSPs deliver on multi-partner business models.
However, due to COVID-19, the global BSS market is estimated to experience a slowdown in the short term, whereas the long-term outlook remains positive.
“It is evident that BSS can significantly drive efforts to help organizations address key concerns such as introducing digital services and enabling customers to personalize their service experience,” said Vikrant Gandhi, Senior Industry Director, Information & Communication Technologies at Frost & Sullivan.
“However, businesses from across many other industry verticals are still relatively early in their digitization efforts and are facing issues similar to those of CSPs in the early days of their digital transformation efforts.”
Gandhi added: “Given the evolving situation, it is more critical than ever for wireless networks to function reliably and support the connectivity requirements across the board. BSS vendors are supporting existing 4G (and earlier generations) network services that currently drive the majority of their revenue.
“Going forward, while the wireless industry remains a priority for BSS vendors, they are also able to align BSS solutions to meet the needs of communications, financial services, healthcare, and media and entertainment companies, as well as government entities.”
BSS vendors can partner with CSPs to create immense growth prospects
- Pioneer new price plans and partner-based business models such as B2B, B2C, and B2B2X for 5G success.
- Introduce AI-driven BSS and customer experience solutions that help CSPs deliver differentiated 5G services.
- Leverage cloud-native principles and support flexible deployments (core and edge) to help operators monetize different features of the network and create new opportunities.
- Implement a robust 5G policy that can set performance characteristics, including quality of service (QoS) and latency. With 5G, the policy can control networks and services down to the device level to ensure the best customer experience while managing valuable network resources.
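The last point, controlling QoS and latency down to the device level, can be illustrated with a toy policy check. The field names loosely follow 3GPP 5QI concepts but are simplified assumptions for the sketch, not an actual policy control API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QosPolicy:
    five_qi: int              # standardized 5G QoS identifier class (assumed field)
    max_latency_ms: float     # packet delay budget for the device/slice
    min_downlink_mbps: float  # guaranteed downlink throughput target

def meets_policy(policy: QosPolicy, measured_latency_ms: float,
                 measured_downlink_mbps: float) -> bool:
    """Check a device's measured performance against its policy targets."""
    return (measured_latency_ms <= policy.max_latency_ms
            and measured_downlink_mbps >= policy.min_downlink_mbps)

# e.g. a hypothetical low-latency profile for an AR headset
ar_policy = QosPolicy(five_qi=80, max_latency_ms=10.0, min_downlink_mbps=50.0)
print(meets_policy(ar_policy, measured_latency_ms=8.2,
                   measured_downlink_mbps=75.0))  # True
```

In a real network this decision sits in the policy control function and drives enforcement per device or slice; the sketch only shows the shape of the per-device rule.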
Attackers shifted tactics in Q2 2020, with a 570% increase in bit-and-piece DDoS attacks compared to the same period last year, according to Nexusguard.
Perpetrators used bit-and-piece attacks to launch various amplification and elaborate UDP-based attacks to flood target networks with traffic.
Analysts witnessed attacks of much smaller sizes (more than 51% of bit-and-piece attacks were smaller than 30 Mbps), which force communications service providers (CSPs) to subject entire networks of traffic to risk mitigation. This poses a significant challenge for CSPs, whose typical threshold-based detection is unreliable for pinpointing the specific attacks and applying the correct mitigation.
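Why per-flow thresholds fail against bit-and-piece attacks can be shown with a toy example (the flow count and rates below are assumed for illustration): each source stays under the alert threshold, yet the aggregate traffic toward the target network is substantial.

```python
# Toy illustration with assumed numbers: 200 sources each send
# 20 Mbps toward one target network. Every individual flow is below
# a 30 Mbps per-flow alert threshold, so threshold-based detection
# flags nothing, while the aggregate still reaches 4 Gbps.

PER_FLOW_THRESHOLD_MBPS = 30

flows = [20.0] * 200  # Mbps from each attacking source

flagged = [f for f in flows if f > PER_FLOW_THRESHOLD_MBPS]
aggregate_gbps = sum(flows) / 1000

print(len(flagged))    # 0 flows individually exceed the threshold
print(aggregate_gbps)  # 4.0 Gbps hits the target network anyway
```

This is why the analysts argue for models that learn traffic patterns across the network rather than alerting on individual flows.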
Improvements in resources and technology will cause botnets to become more sophisticated, helping them increase resilience and evade detection efforts to gain command and control of target systems. The evolution of attacks means CSPs need to detect and identify smaller and more complex attack traffic patterns amongst large volumes of legitimate traffic.
Switching to deep learning-based predictive models recommended
Analysts recommend service providers switch to deep learning-based predictive models in order to quickly identify malicious patterns and surgically mitigate them before any lasting damage occurs.
“Cyber attackers have rewritten their battlefield playbooks and craftily optimized their resources so that they can sustain longer, more persistent attacks. Companies must look to deep learning in their approaches if they hope to match the sophistication and complexity needed to effectively stop these advanced threats.”
In the past, attackers used bit-and-piece attacks with a single attack vector, launching new attacks based on that vector. More recently there has been a tendency to employ a blend of offensive measures to launch a wider range of attacks, making it harder for CSPs to detect and differentiate between malicious and legitimate traffic.