Operator‑billed 5G connections revenue to reach $357 billion by 2025

Operator‑billed revenue from 5G connections will reach $357 billion by 2025, rising from $5 billion in 2020, its first full year of commercial service, according to Juniper Research.

By 2025, 5G revenue is anticipated to represent 44% of global operator‑billed revenue, owing to the rapid migration of 4G mobile subscribers to 5G networks and to new business use cases enabled by 5G technology.

However, the study identified 5G network roll-outs as highly resilient to the COVID-19 pandemic. It found that supply chain disruptions caused by the initial pandemic period have been mitigated through modified physical roll-out procedures that maintain the momentum of hardware deployments.

5G connections to generate 250% more revenue than average cellular connection

The study found that 5G uptake had surpassed initial expectations, predicting total 5G connections will surpass 1.5 billion by 2025. It also forecast that the average 5G connection will generate 250% more revenue than an average cellular connection by 2025.

Operators will apply this premium pricing to 5G connections to secure a return on investment in new 5G-enabled services such as uRLLC (Ultra-Reliable Low-Latency Communication) and network slicing.

However, these services, alongside the high-bandwidth capabilities of 5G, will create data-intensive use cases that drive 270% growth in data traffic generated by all cellular connections over the next five years.

Networks must increase virtualisation to handle 5G data traffic

Operators must use future launches of standalone 5G networks as an opportunity to further increase virtualisation in core networks. Failure to develop 5G network architectures that can handle increasing traffic will lead to reduced network functionality, inevitably diminishing the value proposition of an operator’s 5G network amongst end users.

Research author Sam Barker remarked: “Operators will compete on 5G capabilities, in terms of bandwidth and latency. A lesser 5G offering will lead to user churn to competing networks and missed opportunities in operators’ fastest-growing revenue stream.”

New research shows risk in healthcare supply chain

Exposures and cybersecurity challenges can prove costly: according to statistics from the US Department of Health and Human Services (HHS), 861 breaches of protected health information have been reported over the last 24 months.

New research from RiskRecon and the Cyentia Institute pinpointed risk in the third-party healthcare supply chain and showed that healthcare’s high exposure rate indicates that managing even a comparatively small Internet footprint is a big challenge for many organizations in that sector.

But there is a silver lining: gaining the visibility needed to pinpoint and rectify exposures in the healthcare risk surface is feasible.

Key findings

The research and report are based on RiskRecon’s assessment of more than five million internet-facing systems across approximately 20,000 organizations, focusing exclusively on the healthcare sector.

Highest rate

Healthcare has one of the highest average rates of severe security findings relative to other industries. Furthermore, those rates vary hugely across institutions, meaning the worst exposure rates in healthcare are worse than the worst exposure rates in other sectors.

Size matters

Severe security findings decrease as employee count increases. For example, the rate of severe security findings in the smallest healthcare providers is 3x higher than that of the largest providers.

Sub-sectors vary

Sub-sectors within healthcare reveal different risk trends. The research shows that hospitals have a much larger Internet surface area (hosts, providers, countries) but maintain relatively low rates of security findings. Additionally, the nursing and residential care sub-sector has the smallest Internet footprint yet the highest levels of exposure. Outpatient (ambulatory) and social services mostly fall in between hospitals and nursing facilities.

Cloud deployment impacts

As digital transformation ushers in a plethora of changes, critical areas of risk exposure are also changing and expanding. While most healthcare firms host a majority of their Internet-facing systems on-prem, they also leverage the cloud. The research found that healthcare’s severe finding rate for high-value assets in the cloud is 10 times that of on-prem assets. This is the largest on-prem versus cloud exposure imbalance of any sector.

It must also be noted that not all cloud environments are the same. A previous RiskRecon report on the cloud risk surface found an average 12x difference between the cloud providers with the highest and lowest exposure rates. This says more about the users and use cases of various cloud platforms than about intrinsic security inequalities. In addition, as healthcare organizations look to migrate to the cloud, they should assess their own capabilities for handling cloud security.

The healthcare supply chain is at risk

It’s important to realize that the broader healthcare ecosystem spans numerous industries, and these entities often have deep connections into healthcare providers’ facilities, operations and information systems. Those connections can have significant ramifications for third-party risk management.

When you dig into it, even though big pharma has the biggest footprint (hosts, third-party service providers, and countries of operation), it keeps that footprint relatively hygienic. Manufacturers of various types of healthcare apparatus and instruments show a similar profile of extensive assets yet fewer findings. Unfortunately, the information-heavy industries of medical insurance, EHR systems providers and collection agencies occupy three of the top four slots for the highest rate of security findings.

“In 2020, Health Information Sharing and Analysis Center (H-ISAC) members across healthcare delivery, big pharma, payers and medical device manufacturers saw increased cyber risks across their evolving and sometimes unfamiliar supply chains,” said Errol Weiss, CSO at H-ISAC.

“Adjusting to the new operating environment presented by COVID-19 forced healthcare companies to rapidly innovate and adopt solutions like cloud technology that also added risk with an expanded digital footprint to new suppliers and partners with access to sensitive patient data.”

Only 44% of healthcare providers conform to protocols outlined by the NIST CSF

Only 44% of healthcare providers, including hospitals and health systems, conformed to protocols outlined by the NIST CSF – with scores in some cases trending backwards since 2017, CynergisTek reveals.

Healthcare providers and NIST CSF

Analysts examined nearly 300 assessments of provider facilities across the continuum, including hospitals, physician practices, ACOs and Business Associates.

The report also found that healthcare supply chain security is one of the lowest ranked areas for NIST CSF conformance. This is a critical weakness, given that COVID-19 demonstrated just how broken the healthcare supply chain really is, with providers buying PPE from unvetted suppliers.

“We found healthcare organizations continue to enhance and improve their programs year-over-year. The problem is they are not investing fast enough relative to an innovative and well-resourced adversary,” said Caleb Barlow, CEO of CynergisTek.

“These issues, combined with the rapid onset of remote work, accelerated deployment of telemedicine and impending openness of EHRs and interoperability, have set us on a path where investments need to be made now to shore up America’s health system.

“However, the report isn’t all doom and gloom. Organizations that have invested in their programs and had regular risk assessments, devised a plan, addressed prioritized issues stemming from the assessments and leveraged proven strategies like hiring the right staff and evidence-based tools have seen significant improvements to their NIST CSF conformance scores.”

Bigger budgets don’t mean better security performance

The report revealed bigger healthcare institutions with bigger budgets didn’t necessarily perform better when it comes to security, and in some cases, performed worse than smaller organizations or those that invested less.

In some cases, this was a direct result of consolidation where systems directly connect to newly-acquired hospitals without first shoring up their security posture and conducting a compromise assessment.

“What our report has uncovered over recent years is that healthcare is still behind the curve on security. While healthcare’s focus on information security has increased over the last 15 years, investment is still lagging. In the age of remote working and an attack surface that has exponentially grown, simply maintaining a security status quo won’t cut it,” said David Finn, EVP of Strategic Innovation at CynergisTek.

“The good news is that issues emerging in our assessments are largely addressable. The bad news is that it is going to require investment in an industry still struggling with financial losses from COVID-19.”

Leading factors influencing performance include poor security planning, lack of organizational focus, inadequate reporting structures and funding, confusion around priorities, lack of staff, and the absence of a clear plan.

Key strategies to bolster healthcare security and achieve success

Look under the hood at security and privacy amid mergers and acquisitions: For health systems planning to integrate new organizations into the fold through mergers and acquisitions, leadership should look under the hood and be more diligent when examining the organization’s security and privacy infrastructure, measures and performance.

It’s important to understand their books and revenue streams as well as their potential security risks and gaps to prevent these issues from becoming liabilities.

Make security an enterprise priority: While other sectors like finance and aerospace have treated security as an enterprise-level priority, healthcare must also make this kind of commitment.

Understanding how these risks tie into the bigger picture will help an organization that believes it cannot afford to invest in privacy and information security risk management understand why making such an investment is crucial.

Hospitals and healthcare organizations should create collaborative, cross-functional task forces like enterprise response teams, which offer other business units an eye-opening look into how security and privacy touch all parts of the business including financial, HR, and more.

Money isn’t a solution: Just throwing money at a problem doesn’t work. Security leaders need to identify priorities and have a plan that leverages talent and tried-and-true strategies like multi-factor authentication, privileged access management and ongoing staff training to truly level up their defenses and take a more holistic approach, especially when bringing on new services such as telehealth.

Accelerate the move to cloud: While healthcare has traditionally been slow to adopt the cloud, these solutions provide the agility and scalability that can help leaders cope with situations like COVID-19, and other crises more effectively.

Shore up security posture: We frequently learn the hard way that security can disrupt workflow. COVID-19 taught us that workflow can also disrupt security, and things are going to get worse before they get better. Get an assessment quickly to determine immediate needs and come up with a game plan to bolster the defenses needed in this next normal.

Companies continue to expose unsafe network services to the internet

33% of companies within the digital supply chain expose common network services such as data storage, remote access and network administration to the internet, according to RiskRecon. In addition, organizations that expose unsafe services to the internet also exhibit more critical security findings.

The research is based on an assessment of millions of internet-facing systems across approximately 40,000 commercial and public institutions. The data was analyzed in two ways: the proportion of internet-facing hosts running unsafe services, and the percentage of companies that expose unsafe services somewhere across their infrastructure.

The research concludes that the impact is further heightened when vendors and business partners run unsafe, exposed services used by their digital supply chain customers.

“Blocking internet access to unsafe network services is one of the most basic security hygiene practices. The fact that one-third of companies in the digital supply chain are failing at one of the most basic cybersecurity practices should serve as a wake-up call to executives and third-party risk management teams,” said Kelly White, CEO, RiskRecon.

“We have a long way to go in hardening the infrastructure that we all depend on to safely operate our businesses and protect consumer data. Risk managers will be well served to leverage objective data to better understand and act on their third-party risk.”

Expose unsafe network services: Key findings

  • 33% of organizations expose one or more unsafe services across hosts under their control. As such, admins should either eliminate direct internet access or deploy compensating controls where such services are required.
  • Direct internet access to database services should be prohibited or secured. Within the top three unsafe network services, datastores such as S3 buckets and MySQL databases are the most commonly exposed (a minimal exposure-check sketch follows this list).
  • Digital transformation and the shift to remote work need to be considered. Remote access is the second most commonly exposed service; admins should consider restricting access to these services to authorized, internal users.
  • Universities are woefully exposed. With a culture that boasts open access to information and collaboration, the education sector has the greatest tendency to expose unsafe network services on non-student systems, with 51.9% of universities running unsafe services.
  • Global regions lack proper security posture. Countries such as Ukraine, Indonesia, Bulgaria, Mexico and Poland show the highest rates of domestically-hosted systems running unsafe services.
  • Beware of Elasticsearch and MongoDB. Firms that expose these services to the internet have a 4x to 5x higher rate of severe security findings than those that do not run them on internet-facing hosts.
  • Unsafe services uncover other security issues. Failing to patch software and implement web encryption are two of the most prevalent security findings associated with unsafe services.
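To make the database and remote-access findings above concrete, here is a minimal sketch of an exposure check using only the Python standard library; the port list, timeout and target hostname are illustrative assumptions, not RiskRecon’s assessment methodology.

```python
# Minimal exposure check, assuming only the Python standard library; the port list,
# timeout and target host are illustrative, not RiskRecon's actual assessment method.
import socket

UNSAFE_PORTS = {
    3306: "MySQL",          # datastore
    5432: "PostgreSQL",     # datastore
    9200: "Elasticsearch",  # datastore
    27017: "MongoDB",       # datastore
    3389: "RDP",            # remote access
    23: "Telnet",           # remote access / network administration
}

def exposed_services(host, timeout=2.0):
    """Return the commonly unsafe services that accept TCP connections on a host."""
    findings = []
    for port, name in UNSAFE_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                findings.append((port, name))
        except OSError:
            pass  # closed, filtered, or unreachable
    return findings

if __name__ == "__main__":
    for port, name in exposed_services("vendor.example.com"):  # hypothetical host
        print(f"{name} ({port}/tcp) reachable from the internet - review or restrict")
```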

“This research should be welcome news to organizations struggling under the pressure to conduct exhaustive and time-consuming security assessments of their external business partners,” said Jay Jacobs, partner, Cyentia Institute.

“Similar to how medical doctors diagnose illnesses through various outward signs exhibited by their patients, third-party risk programs can perform quick, reliable diagnostics to identify underlying cybersecurity ailments.

“Not only is the presence of unsafe network services a problem in itself, but the data we examine in this report also shows that they’re a symptom of broader problems. Easy, reliable risk indicators like this offer a rare quick win for risk assessments.”

Surge in cyber attacks targeting open source software projects

There has been a massive 430% surge in next generation cyber attacks aimed at actively infiltrating open source software supply chains, Sonatype has found.

Rise of next-gen software supply chain attacks

According to the report, 929 next generation software supply chain attacks were recorded from July 2019 through May 2020. By comparison, 216 such attacks were recorded in the four years between February 2015 and June 2019.

The difference between “next generation” and “legacy” software supply chain attacks is simple but important: next generation attacks like Octopus Scanner and electron-native-notify are strategic and involve bad actors intentionally targeting and surreptitiously compromising “upstream” open source projects so they can subsequently exploit vulnerabilities when they inevitably flow “downstream” into the wild.

Conversely, legacy software supply chain attacks like Equifax are tactical and involve bad actors waiting for new zero day vulnerabilities to be publicly disclosed and then racing to take advantage of them in the wild before others can remediate.

“Following the notorious Equifax breach of 2017, enterprises significantly ramped investments to prevent similar attacks on open source software supply chains,” said Wayne Jackson, CEO at Sonatype.

“Our research shows that commercial engineering teams are getting faster in their ability to respond to new zero day vulnerabilities. Therefore, it should come as no surprise that next generation supply chain attacks have increased 430% as adversaries are shifting their activities ‘upstream’ where they can infect a single open source component that has the potential to be distributed ‘downstream’ where it can be strategically and covertly exploited.”

Speed remains critical when responding to legacy software supply chain attacks

According to the report, enterprise software development teams differ in their response times to vulnerabilities in open source software components:

  • 47% of organizations became aware of new open source vulnerabilities after a week, and
  • 51% of organizations took more than a week to remediate the open source vulnerabilities

The researchers also found that improved risk management practices do not have to come at the expense of developer productivity. This year’s report reveals that high performing development teams are 26x faster at detecting and remediating open source vulnerabilities, and deploy changes to code 15x more frequently than their peers.

High performers are also:

  • 59% more likely to be using automated software composition analysis (SCA) to detect and remediate known vulnerable OSS components across the SDLC
  • 51% more likely to centrally maintain a software bill of materials (SBOM) for applications
  • 4.9x more likely to successfully update dependencies and fix vulnerabilities without breakage
  • 33x more likely to be confident that OSS dependencies are secure (i.e., no known vulnerabilities)

Additional findings

  • 1.5 trillion component download requests projected in 2020 across all major open source ecosystems
  • 10% of Java OSS component downloads by developers had known security vulnerabilities
  • 11% of open source components developers build into their applications are known vulnerable, with 38 vulnerabilities discovered on average
  • 40% of npm packages contain dependencies with known vulnerabilities
  • New open source zero-day vulnerabilities are exploited in the wild within 3 days of public disclosure
  • The average enterprise sources code from 3,500 OSS projects, including over 11,000 component releases.

“We found that high performers are able to simultaneously achieve security and productivity objectives,” said Gene Kim, DevOps researcher and author of The Unicorn Project. “It’s fantastic to gain a better understanding of the principles and practices of how this is achieved, as well as their measurable outcomes.”

“It was really exciting to find so much evidence that this much-discussed tradeoff between security and productivity is really a false dichotomy. With the right culture, workflow, and tools, development teams can achieve great security and compliance outcomes together with class-leading productivity,” said Dr. Stephen Magill, Principal Scientist at Galois & CEO of MuseDev.

Patented algorithms predict, identify, diagnose and prevent abnormalities in complex systems

The COVID-19 pandemic has forced public health, supply chain, transportation, government, economic and many other entities to interact in real time. One of the challenges in large systems interacting in this way is that even tiny errors in one system can cause devastating effects across the entire system chain.

Now, Purdue University innovators have come up with a possible solution: a set of patented algorithms that predict, identify, diagnose and prevent abnormalities in large and complex systems.

“It has been proven again and again that large and complex systems can and will fail and cause catastrophic impact,” said Shimon Y. Nof, a Purdue professor of industrial engineering and director of Purdue’s PRISM Center.

“Our technology digests the large amount of data within and across systems and determines the sequence of resolving interconnected issues to minimize damage, prevent the maximum number of errors and conflicts from occurring, and achieve system objectives through interaction with decision makers and experts.”

Applying systems science and data science to solve problems

Nof said this technology would be helpful for smart grids, healthcare systems, supply chains, transportation systems and other distributed systems that deal with ubiquitous abnormalities and exceptions, and are vulnerable to cascading or large-scale failures.

This technology integrates constraint modeling, network science, adaptive algorithms and state-of-the-art decision support systems.

“Our algorithms and solution apply systems science and data science to solve problems that encompass time, space and disciplines, which is the core of industrial engineering,” said Xin Chen, a former graduate student in Nof’s lab who helped create the technology.

Nof said the novelty of the technology lies in three main areas. First, analytical and data mining tools extract the underlying network structures of a complex system and determine its unique features. A robust set of algorithms is then analyzed based on the objectives for system performance and on the structures and features of fault networks in the system.

Finally, algorithms with specific characteristics are applied to manage errors and conflicts to achieve desired system performance.
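The patented algorithms themselves are not described in detail here; the toy sketch below, which assumes the networkx package, only illustrates the general idea of extracting a fault-propagation network and sequencing which interconnected issues to resolve first to limit downstream damage.

```python
# Toy illustration only, not the patented Purdue algorithms: build a small fault
# dependency network and rank detected errors by how much of the system sits downstream.
import networkx as nx

# Directed edges mean "a fault here can propagate to". Nodes and edges are made up.
G = nx.DiGraph()
G.add_edges_from([
    ("supplier_data_feed", "inventory_system"),
    ("inventory_system", "production_schedule"),
    ("production_schedule", "shipping"),
    ("power_grid_sensor", "production_schedule"),
])

detected_errors = ["power_grid_sensor", "shipping"]

def downstream_impact(node):
    """Number of components a fault at this node could cascade into."""
    return len(nx.descendants(G, node))

# Resolve the error with the largest potential cascade first.
for node in sorted(detected_errors, key=downstream_impact, reverse=True):
    print(f"resolve {node} (downstream components at risk: {downstream_impact(node)})")
```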

Third-party risk is broken, businesses unprepared for supply chain disruptions

Many companies are not dedicating proper resources to assess third-party risks, and those that are still lack confidence in their programs, according to Prevalent.

Supply chain disruptions

As a result, there are real consequences including loss of revenue, loss of productivity, and loss of reputation – all of which can jeopardize resiliency and are amplified given today’s supply chain concerns related to COVID-19.

“Organizations are starting to ask the question about what happens to them if their supply chain partners go out of business. Sadly, most companies don’t have the risk visibility into their supply chains to answer that question,” stated Brenda Ferraro, VP of third-party risk at Prevalent.

“How can they expect to adequately manage their own risk without understanding the risks vendors and partners pose?”

Key findings from the report

  • Lack of confidence in the program inhibits results: 54% of organizations have some meaningful experience in conducting third-party risk assessments, yet only 10% are extremely confident in their programs.
  • Significant consequences: 76% of respondents said that they experienced one or more issues that impacted vendor performance – resulting in a loss of productivity (39%), monetary damages (28%) and a loss of reputation (25%).
  • Unsatisfactory number of assessments: 66% of respondents say they should be assessing more than three-fourths of their top tier vendors but aren’t doing so.
  • Costs, resources and lack of process are inhibitors to success: Lack of resources (74%), cost (39%) and insufficient processes (32%) are keeping respondents from assessing all their top-tier vendors.
  • No one seems happy with their existing toolset: Satisfaction levels with existing tools hover in the 50% range, and the weighted average satisfaction caps out at 3.8/5.0. GRC tools have an especially long way to go, with a 41% satisfaction rate.

Third-party risk management program

Growing and maturing an adaptable and agile third-party risk management program that is resilient in times of crisis doesn’t have to be a complex and time-consuming process. The report concludes with five recommendations to jump-start vendor risk activities:

  • Develop a programmatic process
  • Build a cross-functional team that extends beyond risk and compliance
  • Be comprehensive without being complex
  • Maintain options for assessment collection and analysis for agility
  • Complement your decision-making with risk-based intelligence

The cybersecurity implications of working remotely

We sat down with Demi Ben-Ari, CTO at Panorays, to discuss the cybersecurity risks of remote work facilitated by virtual environments.

The global spread of the COVID-19 coronavirus has had a notable impact on workplaces worldwide, and many organizations are encouraging employees to work from home. What are the cybersecurity implications of this shift?

Having a sizable number of employees suddenly working remotely can be a major change for organizations and presents numerous problems with regard to cybersecurity.

One issue involves a lack of authentication and authorization. Because people are not seeing each other face-to-face, there is an increased need for two-factor authentication, monitoring access controls and creating strong passwords. There’s also a risk of increased attacks like phishing and malware, especially since employees will now likely receive an unprecedented number of emails and online requests.
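As one concrete illustration of the two-factor authentication he mentions, the sketch below uses time-based one-time passwords via the pyotp package; the enrollment flow, secret handling and naming are simplified assumptions, not a tool recommendation from Panorays.

```python
# Minimal TOTP (time-based one-time password) sketch using the pyotp package.
# Secret storage, enrollment and rate limiting are all simplified assumptions.
import pyotp

# Enrollment: generate a per-user secret and share it via a QR-code provisioning URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Login: after the password check, require the current 6-digit code from the user's app.
submitted_code = totp.now()              # stand-in for what the user types in
if totp.verify(submitted_code, valid_window=1):
    print("second factor accepted")
else:
    print("second factor rejected")
```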

Moreover, remote working can effectively widen an organization’s attack surface. This is because employees who use their own devices for work can introduce new platforms and operating systems that require their own dedicated support and security. With so many devices being used, it’s likely that at least some will fall through the security cracks.

Finally, these same security considerations apply to an organization’s supply chain. This can be challenging, because often smaller companies lack the necessary know-how and human resources to implement necessary security measures. Hackers are aware of this and can start targeting third-party suppliers with the goal of penetrating upstream partners.

What are the hidden implications of human error?

With less effective communication, organizations are unquestionably more prone to human error. When you’re not sitting next to the person you work with, the chances of making configuration mistakes that will expose security gaps are much higher. These cyber gaps can then be exploited by malicious actors.

IT departments are especially prone to error because they are changing routines and must open internal systems to do external work. For example, because of the shift to a remote workplace, IT teams may have to introduce new network and VPN configurations, new devices, ports and IP addresses. Such changes effectively result in a larger attack surface and create the possibility that something may be set up incorrectly when implementing them.

The fact that people are not working face-to-face exacerbates the situation: Because it’s harder to confirm someone’s identity, there’s more room for error.

What are the potential compliance implications of this huge increase in mobile working?

There’s greater risk, because employees are not on the organization’s network and the organization is not fully in control of their devices. Essentially, the organization has lost the security of being in a physical protected area. As a result, organizations also open themselves up to greater risk of not adequately complying with regulations that demand a certain level of cybersecurity.

Another compliance issue is related to change. For example, an organization may be certified for SOC2, but those controls may not remain in place with people working from home. Thus a major, sudden change like a mass remote workforce can unintentionally lead to noncompliance.

How can organizations efficiently evaluate new vendors, eliminate security gaps and continuously monitor their cyber posture?

As part of their third-party security strategy, organizations should take the following steps:

1. Map all vendors along with their relationship to the organization, including the type of data they access and process. For example, some vendors store and process sensitive data, while others might have access to update software code on the production environment.

2. Prioritize vendors’ criticality. Some vendors are considered more critical than others in terms of the business impact they pose, their technology relationship with the organization or even regulatory aspects. For example, a certain supplier might process all employee financial information, while another supplier might be a graphic design agency that produces posters for a marketing event (a minimal sketch of this mapping and prioritization follows these steps).

3. Gain visibility and control over vendors. This can be accomplished by using a solution to thoroughly assess vendors, preferably with a combination of scanning the vendor’s attack surface and completion of security questionnaires. With the shift to remote working, organizations should also be sure to include questions that assess vendors’ preparedness for working from home.

4. Continuously monitor vendors’ security posture. Visibility and control require a scalable solution for the hundreds or even thousands of suppliers that organizations typically engage with these days. Organizations should ensure that their solution alerts of any changes in cyber posture and that they respond accordingly. For example, organizations may decide to limit access, or even completely close connections between the supplier and the organization’s environment.
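A minimal sketch of steps 1 and 2, assuming a simple in-house inventory rather than any particular third-party risk product; the fields, scoring weights and example vendors are illustrative assumptions.

```python
# Toy vendor inventory illustrating steps 1 and 2: map vendors and the data they touch,
# then prioritize by criticality. Field names, weights and examples are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Vendor:
    name: str
    services: str
    data_accessed: List[str] = field(default_factory=list)  # e.g. PII, payroll data
    production_access: bool = False                          # can they touch production?
    regulated: bool = False                                   # e.g. HIPAA/PCI in scope

    def criticality(self) -> int:
        """Higher score means assess and monitor this vendor first."""
        score = len(self.data_accessed)
        score += 3 if self.production_access else 0
        score += 2 if self.regulated else 0
        return score

vendors = [
    Vendor("PayrollCo", "payroll processing", ["employee financial data"], regulated=True),
    Vendor("DesignStudio", "marketing posters"),
    Vendor("CodeDeployInc", "CI/CD tooling", production_access=True),
]

# Step 2: rank vendors so the most critical ones get questionnaires and scans first.
for v in sorted(vendors, key=Vendor.criticality, reverse=True):
    print(f"{v.name}: criticality {v.criticality()}")
```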

Can 5G make you more vulnerable to cyberattacks?

Many enterprises and sectors are unaware of the 5G security vulnerabilities that exist today. Choice IoT says it’s critical for businesses to have a plan for discovering and overcoming them at the outset of a 5G/IoT platform rollout to avoid future cybersecurity disasters.

There is a big difference between the promise of 5G’s low latency, higher bandwidth and speed for businesses and the security of 5G. While many are excited about Gartner’s prediction of $4.2 billion being invested in global 5G wireless network infrastructure in 2020, few discuss the business costs of its unheralded security holes.

That’s an ongoing conversation that 5G and IoT solutions experts like Choice IoT’s CEO Darren Sadana are having with enterprises with 5G plans on the drawing board. “Businesses will need a strategy for overcoming 5G’s inherited security flaws from 4G or face major losses and privacy catastrophes.”

5G is poised to drive IoT, industrial IoT (IIoT), cloud services, network virtualization, and edge computing, which multiplies the endpoint security complications. Although the manufacturing sector cites IIoT security as the top priority, the combination of 5G security vulnerabilities may come back to haunt them.

Pinpointing 5G security vulnerabilities

According to an Accenture study of more than 2,600 business and technology decision makers across 12 industry sectors in Europe, North America and Asia-Pacific, 62% fear 5G will make them more vulnerable to cyberattacks. At the root of the problem is the reality that many of the security problems stem from the software-defined, virtualized nature of 5G versus the hardware foundations of earlier LTE mobile communication standards.

Its central role in IoT is both a strength and a weakness, since endpoints are highly localized and beyond the network edge. The 5G network promises of device authentication, device encryption, device ID and credentialing are positives, but the flip side is that many of those pluses also carry security dangers.

The nature of how signals and data are routed in 5G/IoT networks can lead to Mobile Network mapping (MNmap), where attackers can create maps of devices connected to a network, identify each device and link it to a specific person. Then there are Man-in-the-middle (MiTM) attacks that enable attackers to hijack the device information before security is applied.

There are also supply chain security challenges with platform components bought from overseas that harbor inherent security flaws. This can be seen in the backdoor vulnerabilities alleged to be purposely built into mobile carrier networks supplied with equipment from Chinese equipment giant Huawei.

The back doors would allow malicious actors to obtain target locations, eavesdrop on calls and potentially inject ransomware into a 5G network targeting a mobile carrier.

Other vulnerabilities covered across the wireless and IoT sectors include SIM jacking, weaknesses in Authentication and Key Agreement (AKA) protocols and a host of base station backdoor vulnerabilities.

IoT deployments for everything from smart homes, medical devices and machine-to-machine (M2M) operation to smart cities, power grids and autonomous vehicles are threat targets. They all give attackers multiple ways to manipulate interconnected IoT devices communicating data via 5G networks.

DDoS attacks, the ability to take control of video surveillance systems and medical devices, and more are all possible due to this broader attack surface and inherent 5G vulnerabilities.

Plugging the holes

The picture doesn’t have to be a bleak one for businesses and enterprises that want to maximize the benefits of 5G while eliminating its vulnerabilities across sectors like healthcare, utilities, finance, automotive, communication and many others.

A U.S. senator recently called on the FCC to require wireless carriers rolling out 5G networks to develop cybersecurity standards. Sadana and other experts make it clear that assessment, discovery and planning are key. They form the foundation for 5G/IoT platform buildout vulnerability identification and system modifications that encompass IT/OT and wireless connectivity.

Sadana points to the NIST National Cybersecurity Center of Excellence (NCCoE), which is developing a NIST Cybersecurity Practice Guide. This will demonstrate how the components of 5G architectures can be used securely to mitigate risks and meet industry sectors’ compliance requirements across use case scenarios.

“While this goes a long way to providing a standardized practices roadmap for companies in creating 5G platforms that are secure, it’s only a start,” explained Sadana. “5G is still the wild west with things changing every day, so businesses need IoT/IT security expert partners that can help them plan from the ground up.”

Tiny cryptographic ID chip can help combat hardware counterfeiting

To combat supply chain counterfeiting, which can cost companies billions of dollars annually, MIT researchers have invented a cryptographic ID tag that’s small enough to fit on virtually any product and verify its authenticity.

A 2018 report from the Organization for Economic Co-operation and Development estimates about $2 trillion worth of counterfeit goods will be sold worldwide in 2020. That’s bad news for consumers and for companies that order parts from different sources worldwide to build products. Counterfeiters tend to use complex routes that include many checkpoints, making it challenging to verify parts’ origins and authenticity. Consequently, companies can end up with imitation parts.

Wireless ID tags are becoming increasingly popular for authenticating assets as they change hands at each checkpoint. But these tags come with various size, cost, energy, and security tradeoffs that limit their potential.

Popular radio-frequency identification (RFID) tags, for instance, are too large to fit on tiny objects such as medical and industrial components, automotive parts, or silicon chips. RFID tags also lack robust security measures.

Some tags are built with encryption schemes to protect against cloning and ward off hackers, but they’re large and power hungry. Shrinking the tags means giving up both the antenna package — which enables radio-frequency communication — and the ability to run strong encryption.

In a paper the researchers presented at the IEEE International Solid-State Circuits Conference, they describe an ID chip that navigates all those tradeoffs. It’s millimeter-sized and runs on relatively low levels of power supplied by photovoltaic diodes.

It also transmits data at far ranges, using a power-free “backscatter” technique that operates at a frequency hundreds of times higher than RFIDs. Algorithm optimization techniques also enable the chip to run a popular cryptography scheme that guarantees secure communications using extremely low energy.

“We call it the ‘tag of everything.’ And everything should mean everything,” says co-author Ruonan Han, an associate professor in the Department of Electrical Engineering and Computer Science and head of the Terahertz Integrated Electronics Group in the Microsystems Technology Laboratories (MTL).

“If I want to track the logistics of, say, a single bolt or tooth implant or silicon chip, current RFID tags don’t enable that. We built a low-cost, tiny chip without packaging, batteries, or other external components, that stores and transmits sensitive data.”

Joining Han on the paper are: graduate students Mohamed I. Ibrahim, Muhammad Ibrahim Wasiq Khan, and Chiraag S. Juvekar; former postdoc associate Wanyeong Jung; former postdoc Rabia Tugce Yazicigil; and Anantha P. Chandrakasan, who is the dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

Solving the problem of size

The work began as a means of creating better RFID tags. The team wanted to do away with packaging, which makes the tags bulky and increases manufacturing cost.

They also wanted communication in the high terahertz frequencies between microwave and infrared radiation (around 100 gigahertz to 10 terahertz), which enables chip integration of an antenna array and wireless communications at greater reader distances.

Finally, they wanted cryptographic protocols because RFID tags can be scanned by essentially any reader and transmit their data indiscriminately.

But including all those functions would normally require building a fairly large chip. Instead, the researchers came up with “a pretty big system integration,” Ibrahim says, that enabled putting everything on a monolithic — meaning, not layered — silicon chip that was only about 1.6 square millimeters.

One innovation is an array of small antennas that transmit data back and forth via backscattering between the tag and reader. Backscatter, used commonly in RFID technologies, happens when a tag reflects an input signal back to a reader with slight modulations that correspond to data transmitted.

In the researchers’ system, the antennas use some signal splitting and mixing techniques to backscatter signals in the terahertz range. Those signals first connect with the reader and then send data for encryption.

Implemented into the antenna array is a “beam steering” function, where the antennas focus signals toward a reader, making them more efficient, increasing signal strength and range, and reducing interference. This is the first demonstration of beam steering by a backscattering tag, according to the researchers.

Tiny holes in the antennas allow light from the reader to pass through to photodiodes underneath that convert the light into about 1 volt of electricity. That powers up the chip’s processor, which runs the chip’s “elliptic-curve-cryptography” (ECC) scheme.

ECC uses a combination of private keys (known only to a user) and public keys (disseminated widely) to keep communications private. In the researchers’ system, the tag uses a private key and a reader’s public key to identify itself only to valid readers. That means any eavesdropper who doesn’t possess the reader’s private key should not be able to identify which tag is part of the protocol by monitoring just the wireless link.
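A minimal sketch of that idea, not the MIT team’s actual protocol: the tag combines its own private key with the reader’s public key (ECDH) to derive a key that only the legitimate reader can also derive, then encrypts its ID so an eavesdropper on the wireless link cannot tell which tag is responding. The curve choice, key-derivation parameters and message format below are illustrative assumptions, using the Python cryptography package.

```python
# Illustrative only, not the MIT design: a tag uses its private key and a reader's
# public key (ECDH) to derive a key only that reader can also derive, then encrypts
# its ID so an eavesdropper on the wireless link cannot tell which tag is replying.
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(private_key, peer_public_key):
    """Derive a 256-bit symmetric key from an ECDH shared secret."""
    shared = private_key.exchange(ec.ECDH(), peer_public_key)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"tag-identification-demo").derive(shared)

# Hypothetical long-term key pairs provisioned to one tag and one authorized reader.
tag_priv = ec.generate_private_key(ec.SECP256R1())
reader_priv = ec.generate_private_key(ec.SECP256R1())

# Tag side: encrypt the tag ID under a key derived from (tag private, reader public).
k_tag = derive_key(tag_priv, reader_priv.public_key())
nonce = os.urandom(12)
ciphertext = AESGCM(k_tag).encrypt(nonce, b"bolt-serial-0042", None)

# Reader side: derive the same key from (reader private, tag public) and decrypt.
k_reader = derive_key(reader_priv, tag_priv.public_key())
print(AESGCM(k_reader).decrypt(nonce, ciphertext, None))  # b'bolt-serial-0042'
```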

Optimizing the cryptographic code and hardware lets the scheme run on an energy-efficient and small processor, Yazicigil says. “It’s always a tradeoff,” she says. “If you tolerate a higher-power budget and larger size, you can include cryptography. But the challenge is having security in such a small tag with a low-power budget.”

Pushing the signal range limits

Currently, the signal range sits around 5 centimeters, which is considered a far-field range — and allows for convenient use of a portable tag scanner. Next, the researchers hope to “push the limits” of the range even further, Ibrahim says.

Eventually, they’d like many of the tags to ping one reader positioned somewhere far away in, say, a receiving room at a supply chain checkpoint. Many assets could then be verified rapidly.

“We think we can have a reader as a central hub that doesn’t have to come close to the tag, and all these chips can beam steer their signals to talk to that one reader,” Ibrahim says.

The researchers also hope to fully power the chip through the terahertz signals themselves, eliminating any need for photodiodes.

The chips are so small, easy to make, and inexpensive that they can also be embedded into larger silicon computer chips, which are especially popular targets for counterfeiting.

Seven cybersecurity and privacy forecasts for 2020

The developments in the area of cybersecurity are alarming. As the number of smart devices in private households increases, so do the opportunities for cyber criminals to attack, TÜV Rheinland reveals.

Uncontrolled access to personal data undermines confidence in the digital society. The logistics industry and private vehicles are increasingly being targeted by hackers.

“From our point of view, it is particularly serious that cybercrime is increasingly affecting our personal security and the stability of society as a whole,” explains Petr Láhner, Business Executive Vice President for the business stream Industry Service & Cybersecurity at TÜV Rheinland.

“One of the reasons for this is that digital systems are finding their way into more and more areas of our daily lives. Digitalization offers many advantages – but it is important that these systems and thus the people are safe from attacks.”

Uncontrolled access to personal data could destabilize the digital society

In 2017, Frenchwoman Judith Duportail asked a dating app company to send her any personal information they had about her. In response, she received an 800-page document containing her Facebook likes and dislikes, the age of the men she had expressed interest in, and every single online conversation she had had with all 870 matching contacts since 2013.

The fact that Judith Duportail received so much personal data after several years of using a single app underscores the fact that data protection is now very challenging. In addition, this example shows how little transparency there is about securing and processing data that can be used to gain an accurate picture of an individual’s interests and behavior.

Smart consumer devices are spreading faster than they can be secured

Smart speakers, fitness trackers, smart watches, thermostats, energy meters, smart home security cameras, smart locks and lights are the best-known examples of the seemingly unstoppable democratization of the “Internet of many Things”.

Smart devices are no longer just toys or technological innovations. The number and performance of individual “smart” devices are increasing every year, as these types of devices are quickly becoming an integral part of everyday life.

It is easy to see a future in which the economy and society will become dependent on them, making them a very attractive target for cyber criminals. Until now, the challenge for cybersecurity has been to protect one billion servers and PCs. With the proliferation of smart devices, the attack surface could quickly increase hundreds or thousands of times.

The trend towards owning a medical device increases the risk of an internet health crisis

Over the past ten years, personal medical devices such as insulin pumps, heart and glucose monitors, defibrillators and pacemakers have been connected to the internet as part of the Internet of Medical Things (IoMT). At the same time, researchers have identified a growing number of software vulnerabilities and demonstrated the feasibility of attacks on these products. This can lead to targeted attacks on both individuals and entire product classes.

In some cases, the health information generated by the devices can also be intercepted. So far, the healthcare industry has struggled to respond to the problem – especially when the official life of the equipment has expired.

As with so many IoT devices of this generation, networking was more important than the need for cybersecurity. The complex task of maintaining and repairing equipment is badly organized, inadequate or completely absent.

Vehicles and transport infrastructure are new targets for cyberattacks

Through the development of software and hardware platforms, vehicles and transport infrastructure are increasingly connected. These applications offer drivers more flexibility and functionality, potentially more road safety, and seem inevitable given the development of self-driving vehicles.

The disadvantage is the increasing number of vulnerabilities that attackers could exploit – some with direct security implications. Broad cyberattacks targeting transport could affect not only the safety of individual road users, but could also lead to widespread disruption of traffic and urban safety.

Hackers target smart supply chains and make them “dumb”

With the goal of greater efficiency and lower costs, smart supply chains leverage IoT automation, robotics and big data management – both within a company and across its suppliers.

Smart supply chains increasingly represent virtual warehousing, where the warehouse is no longer just a physical building, but any place where a product or its components can be located at any time. Nevertheless, there is a growing realization that this business model considerably increases the financial risks, even with only relatively minor disruptions.

Smart supply chains are dynamic and efficient, but they are also prone to disruptions in processes. Cyberattacks can manipulate inventory information, so components would not be where they are supposed to be.

Threats to shipping are no longer just theoretical but a reality

In 2017, goods with an estimated weight of around 10.7 billion tons were transported by sea. Despite current geopolitical and trade tensions, trade is generally expected to continue to grow. There is ample evidence that states are experimenting with direct attacks on ship navigation systems.

At the same time, attacks on the computer networks of ships used to extort ransom have been reported. Port logistics offers a second, overlapping area of vulnerability.

Many aspects of shipping can be vulnerable to attack, such as ship navigation, port logistics and ships’ computer networks. Attacks can originate from states and activist groups. This makes monitoring and understanding the threat landscape a key factor in modern maritime cybersecurity.

Vulnerabilities in real-time operating systems could herald the end of the patch age

It is estimated that by 2025 there will be over 75 billion networked IoT devices, each using its own software package. This software, in turn, contains many outsourced and potentially vulnerable components.

In 2019, Armis Labs discovered eleven serious vulnerabilities (called Urgent/11) in the real-time operating system (RTOS) Wind River VxWorks. Six of these flaws exposed an estimated 200 million IoT devices to the risk of remote code execution (RCE) attacks.

This kind of weakness is a major challenge because it is often deeply hidden in a large number of products. Organizations may not even notice that these vulnerabilities exist. In view of this, the procedure of always installing the latest security updates will no longer be effective.

Benefits of blockchain pilot programs for risk management planning

Through 2022, 80% of supply chain blockchain initiatives will remain at a proof-of-concept (POC) or pilot stage, according to Gartner.

One of the main reasons for this development is that early blockchain pilots for supply chain pursued technology-oriented models that have been successful in other sectors, such as banking and insurance. However, successful blockchain use cases for supply chain require a different approach.

“Modern supply chains are very complex and require digital connectivity and agility across participants,” said Andrew Stevens, senior director analyst with the Gartner Supply Chain practice.

“Many organizations believed that blockchain could help navigate this complexity and pushed to create robust use cases for the supply chain. However, most of these use cases were inspired by pilots from the banking and insurance sector and didn’t work well in a supply chain environment.”

This setback should not discourage supply chain leaders from experimenting with blockchain. Blockchain use cases simply require a different approach for supply chain than for other sectors.

From technology-first to technology roadmaps

Adopting a technology-first approach that exclusively targets blockchain infrastructure was the initial idea for use cases in supply chain, mirroring the approach of the banking and insurance sector.

However, this approach did not work, because in contrast to the highly digital-only fintech blockchain use cases, many supply chain use cases will need to capture events and data across physical products, packaging layers and transportation assets.

Additionally, supply chain leaders need to understand how these events can be digitalized for sharing across a potential blockchain-enabled ecosystem of stakeholders.

“Supply chain leaders have now started to treat blockchain as part of a longer-term technology roadmap and of risk management planning. We see that many leaders are adopting a broader end-to-end view across their supply chains and mapping all requirements – from sourcing through manufacturing to final distribution,” Mr. Stevens added.

“Having blockchain as part of an overall technology portfolio has created opportunities for internal collaboration across many areas that have a potential interest in blockchain, such as logistics and IT.”

Blockchain pilot programs

Though most blockchain initiatives didn’t survive past the pilot phase, they have provided fresh stimuli for supply chain leaders to conduct broader supply chain process and technology reviews.

“Many supply chain leaders that have conducted blockchain initiatives found that they now have a more complete overview of the current health of their supply chain. Their perception of how blockchain can be used in the supply chain has also shifted,” Mr. Stevens said.

“By going through the process of deploying blockchain pilot programs, they discovered what needs to change in their organization before blockchain technology can be leveraged effectively.”

Before starting another initiative, supply chain leaders should identify and establish key criteria and technology options for measuring and capturing metrics and data that can indicate an organization’s readiness to explore blockchain.

“In a way, blockchain is a collaboration agent. It forces an organization to continually assess on a broad scale if its structure and employees are ready to embrace this new technology,” Mr. Stevens concluded.

Supply chain examination: Planning for vulnerabilities you can’t control

There are numerous occurrences in which customers’ personally identifiable information stored by an organization’s third-party provider is set loose by maliciously intentioned actors. Threats take on many different shapes and sizes, and they aren’t someone else’s problem or responsibility to control or mitigate.

Data breaches are not only caused by elusive thugs outside the firewalled perimeter, but also by well-intentioned professionals inside the system. These individuals may not be security consultants, but they can be a key part of a supply chain attack – a breach of information caused in a stand-alone moment that ripples through the rest of the supply chain unintentionally.

The supply chain starts with a request for service and ends with fulfillment, and it includes all the moments of data-at-rest, data-in-transit and intersystem communication vital for service fulfillment. Every person and every asset has a responsibility to secure these multiple stages of the supply chain.

On an international scale, a larger conversation is taking place about how to secure data at all levels within the organization. Securing the supply chain is a pivotal process in understanding the complete threat landscape. Here are some common-sense steps to evaluate and discern the varying degrees of supply chain security.

Supply chain examination

When partners provide services to your organization, it’s valuable to reach an understanding of what they are doing. In some cybersecurity spaces there is the concept of technological roll-down: whatever standards the primary company holds, its partner companies should also adhere to. There are legal and liability reasons to come to an agreement with your partners about how they treat your company’s applications and data if they are not hosted internally, or even if they are hosted internally but administered by a vendor.

In one case study of how important it is to vet vendors, an actor gained access to internally vulnerable systems through a non-mission-critical ingress point. Companies outsource things like telephone network administration, air conditioning monitoring or printer maintenance. You have probably seen these situations and can think of examples where a trusted, industry-recognized partner did not perform the same kind of hardening your company demands, leaving an ingress point open to attack.

The first way to combat this kind of vulnerability is simply to ask, in writing, how the partner plans to handle any concerns your organization has. The government supply chain, for example, includes an inquiry and design review process that must be followed regardless of whether the vendor is a prime or sub-prime supplier. From a liability standpoint, this places the risk more on the partner than on the organization consuming the service, but it doesn’t absolve the customer of all risk.

Own all your data

Another strong way of eliminating risk in your enterprise is to own control of all your data. This can take many forms and does not necessarily mean that your organization needs to go back to the data center to build, administer and maintain infrastructure. It could mean building a hardened platform with built-in controls that consider the big questions, including the following:

  • What happens if someone breaks into the cloud provider and steals your logical volumes from the host OS?
  • What happens if the cloud provider has a misconfiguration and an actor comes in through the defined ingress point to attempt to access your data?
  • What happens when a true disaster happens?

Each question can be answered relatively easily if you take these issues into consideration when building a platform, application or enterprise. Having a quality three-tier PKI solution and a hardened identity and access management (IAM) platform are two ways to help gain control of your data, even in the cloud. The use of hardened ingress points in conjunction with a quality IAM solution that includes a two-factor authentication option can eliminate unwanted egress of data by restricting where the data can flow to, or even be requested from. Going through a methodical process of encrypting valuable data at rest, in transit and at the application layer can help ensure confidentiality.

Even in the unlikely event that your environment is compromised, the data is useless without the proper encryption keys.
The last part of this equation is to ensure that whatever you do to protect your data can be quantified, measured and audited on a regular basis, so that your customer data and intellectual property are safeguarded from unwanted access.

Proper encryption

The overall goal of encryption is to protect data. The first step in the encryption process is knowing the types of data that must be protected. While there are regulatory and compliance requirements mandating what type of encryption should be used, it is always good practice to protect any type of personally identifiable data, system inventory and any payment card, healthcare or government-related data. By understanding the types of data a company needs to protect, security professionals can better identify the regulatory requirements that will be placed on the data in scope for encryption.

An effective approach to encryption is to apply it wherever sensitive data is being processed, stored or viewed. For example, in the cloud the user must access the service over the internet, and it is the responsibility of the service provider to give the end user a secure platform for accessing data. Here, good encryption practice would entail encrypting the initial connection as well as session and post-session activity. By applying encryption at all levels of the supply chain, an organization reduces its attack surface.

While it is the service provider’s job to provide the platform, it is the end user’s responsibility to understand how their data is being kept and accessed. Encryption dos and don’ts are simple: apply the principle of least privilege, restrict access to the encryption keys to specific individuals, or keep the keys on specialized hardware such as a Hardware Security Module (HSM). By not properly securing the master encryption keys, an organization inadvertently opens the door to human error and an increased risk surface. This can be reinforced by instituting a centralized key management service and keeping your keys in a separate environment from your data. By separating the keys from the data, an organization can better guarantee the security of the environment.
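A minimal envelope-encryption sketch of that key-separation idea, assuming the Python cryptography package; the key names, the in-memory “key store” and the sample record are illustrative assumptions, not a production key-management design.

```python
# Minimal envelope-encryption sketch: keep the master key separate from the data,
# and store only a wrapped (encrypted) data key alongside the ciphertext.
from cryptography.fernet import Fernet

# Master key: in practice generated and held by an HSM or KMS, never stored with the data.
master_key = Fernet.generate_key()
key_store = Fernet(master_key)          # stands in for the separate key environment

# Data key: generated per record or dataset, used to encrypt the sensitive data itself.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"patient record 1234: ...")

# Only the wrapped data key travels or is stored next to the ciphertext.
wrapped_data_key = key_store.encrypt(data_key)

# Decryption requires access to the key store (least privilege applies here),
# which unwraps the data key before the ciphertext can be read.
plaintext = Fernet(key_store.decrypt(wrapped_data_key)).decrypt(ciphertext)
assert plaintext == b"patient record 1234: ..."
```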

Yet encryption is only as strong as the policies and procedures in place to support it. Working with operations teams to enforce encryption standards and key management is a two-sided struggle. Take the time to educate your employees and conduct good end-user training to ensure that users know and understand their role in the data security process.

These steps help secure an organization’s data and provide a level of assurance that the provider is doing everything in its power to keep up with industry trends and security. This, in turn, builds trust between the provider and users in a way that helps drive business goals while meeting industry standards.

A supply chain that’s secure

By securing the supply chain and educating end users, security organizations can increase security while also driving operational efficiency. By vetting the supply chain, organizations gain a competitive edge by knowing how their data is processed, stored and used. This drives efficiency by giving organizations a way to demonstrate their ability to merge security and operations while providing a viable, secure solution that fulfills the company’s goals and requirements.

Contributing author: Thomas Smith, Senior Security Consultant in Vulnerability Management, Atos North America.

Cyber risk increases at all layers of the corporate network

Organizations will face a growing risk from their cloud and the supply chain, according to Trend Micro.

Cyber risk increases at all levels

The growing popularity of cloud and DevOps environments will continue to drive business agility while exposing organizations, from enterprises to manufacturers, to third-party risk. “As we enter a new decade, organizations of all industries and sizes will increasingly rely on third party software, open-source, and modern working practices to drive the digital …
