Exposures and cybersecurity challenges can turn out to be costly: according to statistics from the US Department of Health and Human Services (HHS), 861 breaches of protected health information have been reported over the last 24 months.
New research from RiskRecon and the Cyentia Institute pinpointed risk in the third-party healthcare supply chain and showed that healthcare’s high exposure rate indicates that managing even a comparatively small Internet footprint is a big challenge for many organizations in the sector.
But there is a silver lining: gaining the visibility needed to pinpoint and rectify exposures in the healthcare risk surface is feasible.
The research and report are based on RiskRecon’s assessment of more than five million internet-facing systems across approximately 20,000 organizations, focusing exclusively on the healthcare sector.
Healthcare has one of the highest average rates of severe security findings relative to other industries. Furthermore, those rates vary hugely across institutions, meaning the worst exposure rates in healthcare are worse than the worst exposure rates in other sectors.
The rate of severe security findings decreases as headcount increases. For example, the rate of severe security findings in the smallest healthcare providers is 3x higher than that of the largest providers.
Sub sectors vary
Sub sectors within healthcare reveal different risk trends. The research shows that hospitals have a much larger Internet surface area (hosts, providers, countries), but maintain relatively low rates of security findings. Additionally, the nursing and residential care sub-sector has the smallest Internet footprint yet the highest levels of exposure. Outpatient (ambulatory) and social services mostly fall in between hospitals and nursing facilities.
Cloud deployment impacts
As digital transformation ushers in a plethora of changes, critical areas of risk exposure are also changing and expanding. While most healthcare firms host a majority of their Internet-facing systems on-prem, they do also leverage the cloud. We found that healthcare’s severe finding rate for high-value assets in the cloud is 10 times that of on-prem. This is the largest on-prem versus cloud exposure imbalance of any sector.
It must also be noted that not all cloud environments are the same. A previous RiskRecon report on the cloud risk surface found an average 12x difference between the cloud providers with the highest and lowest exposure rates. This says more about the users and use cases of various cloud platforms than about intrinsic security inequalities. In addition, as healthcare organizations look to migrate to the cloud, they should assess their own capabilities for handling cloud security.
The healthcare supply chain is at risk
It’s important to realize that the broader healthcare ecosystem spans numerous industries, and these entities often have deep connections into the healthcare provider’s facilities, operations, and information systems. This means those organizations can have significant ramifications for third-party risk management.
When you dig into it, even though big pharma has the biggest footprint (hosts, third-party service providers, and countries of operation), it keeps that footprint relatively hygienic. Manufacturers of various types of healthcare apparatus and instruments show a similar profile of extensive assets yet fewer findings. Unfortunately, the information-heavy industries of medical insurance, EHR systems providers, and collection agencies occupy three of the top four slots for the highest rate of security findings.
“In 2020, Health Information Sharing and Analysis Center (H-ISAC) members across healthcare delivery, big pharma, payers and medical device manufacturers saw increased cyber risks across their evolving and sometimes unfamiliar supply chains,” said Errol Weiss, CSO at H-ISAC.
“Adjusting to the new operating environment presented by COVID-19 forced healthcare companies to rapidly innovate and adopt solutions like cloud technology that also added risk with an expanded digital footprint to new suppliers and partners with access to sensitive patient data.”
Andrew Magnusson started his information security career 20 years ago, and in this book he offers the knowledge he has accumulated to help readers eliminate security weaknesses and threats within their systems.
As he points out in the introduction, bugs are everywhere, but there are actions and processes the reader can apply to eliminate or at least mitigate the associated risks.
The author starts off by explaining vulnerability management basics, the importance of knowing your network and the process of collecting and analyzing data.
He explains the importance of a vulnerability scanner and why it is essential to configure and deploy it correctly, since it provides the information needed to successfully complete a vulnerability management process.
The next step is to automate these processes, which prioritizes vulnerabilities and frees up time to work on the most severe issues, consequently boosting the organization’s security posture.
Finally, it is time to decide what to do with the vulnerabilities you have detected, which means choosing the appropriate security measures, whether it’s patching, mitigation or systemic measures. When the risk has a low impact, there’s also the option of accepting it, but this still needs to be documented and agreed upon.
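The decision logic described above can be sketched as a simple triage function. This is an illustrative example, not code from the book; the CVSS thresholds and response labels are hypothetical:

```python
# Hypothetical triage sketch: map a vulnerability's CVSS score and the
# affected asset's criticality to one of the responses described above
# (patching, mitigation, or documented risk acceptance).
# Thresholds and labels are illustrative, not taken from the book.

def triage(cvss_score: float, asset_critical: bool) -> str:
    """Return a suggested response for a detected vulnerability."""
    if cvss_score >= 7.0:
        # Severe: patch critical assets immediately; mitigate elsewhere
        return "patch" if asset_critical else "mitigate"
    if cvss_score >= 4.0:
        # Moderate: mitigate on critical assets, otherwise schedule a patch
        return "mitigate" if asset_critical else "patch-later"
    # Low impact: risk may be accepted, but the decision must be documented
    return "accept (document and get sign-off)"

print(triage(9.8, asset_critical=True))   # → patch
print(triage(3.1, asset_critical=False))  # → accept (document and get sign-off)
```

In a real program the asset-criticality flag would come from an asset inventory rather than being passed in by hand, which is one reason the book stresses keeping that inventory accurate.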
The important part of this process, and perhaps also the hardest, is building relationships within the organization. The reader needs to respect office politics and make sure all the decisions and changes they make are approved by their superiors.
The second part of the book is practical: the author guides the reader through the process of building their own vulnerability management system, with a detailed analysis of the open source tools they need to use, such as Nmap, OpenVAS, and cve-search, all supported by code examples.
The reader will learn how to build an asset and vulnerability database and how to keep it accurate and up to date. This is especially important when generating reports, as those need to be based on recent vulnerability findings.
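As a rough illustration of the kind of pipeline the book describes, the sketch below parses a fragment of Nmap XML output and records the discovered hosts and open ports in a SQLite asset database. The XML snippet, table schema, and function names are invented for this example and are not taken from the book:

```python
import sqlite3
import xml.etree.ElementTree as ET

# A minimal fragment of Nmap XML output (invented for illustration);
# in practice you would generate this with `nmap -oX scan.xml <targets>`.
NMAP_XML = """<nmaprun>
  <host><address addr="10.0.0.5" addrtype="ipv4"/>
    <ports>
      <port protocol="tcp" portid="22"><state state="open"/></port>
      <port protocol="tcp" portid="443"><state state="open"/></port>
    </ports>
  </host>
</nmaprun>"""

def load_scan(db: sqlite3.Connection, xml_text: str) -> None:
    """Parse Nmap XML and record hosts/ports in the asset database."""
    db.execute("CREATE TABLE IF NOT EXISTS hosts (ip TEXT PRIMARY KEY)")
    db.execute("""CREATE TABLE IF NOT EXISTS ports
                  (ip TEXT, port INTEGER, proto TEXT, state TEXT)""")
    root = ET.fromstring(xml_text)
    for host in root.iter("host"):
        ip = host.find("address").get("addr")
        db.execute("INSERT OR REPLACE INTO hosts VALUES (?)", (ip,))
        for port in host.iter("port"):
            db.execute("INSERT INTO ports VALUES (?, ?, ?, ?)",
                       (ip, int(port.get("portid")), port.get("protocol"),
                        port.find("state").get("state")))
    db.commit()

conn = sqlite3.connect(":memory:")  # use a file path to persist the inventory
load_scan(conn, NMAP_XML)
rows = conn.execute(
    "SELECT ip, port FROM ports WHERE state='open' ORDER BY port").fetchall()
print(rows)  # open ports recorded for each discovered host
```

Re-running the loader after each scan keeps the inventory current, which is the property the reporting stage depends on.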
Who is it for?
Practical Vulnerability Management is aimed at security practitioners who are responsible for protecting their organization and tasked with boosting its security posture. It is assumed they are familiar with Linux and Python.
Despite the technical content, the book is an easy read and offers comprehensive solutions to keeping an organization secure and always prepared for possible attacks.
Senior risk and compliance professionals within financial services companies lack confidence in the security data they are providing to regulators, according to Panaseer.
Results from a global external survey of more than 200 GRC leaders reveal concerns about data accuracy, request overload, resource-heavy processes and a lack of end-to-end automation.
The results indicate a wider issue with cyber risk management. If GRC leaders don’t have confidence in the accuracy and timeliness of security data provided to regulators, then the same holds true for the confidence in their own ability to understand and combat cyber risks.
Only 41% of risk leaders feel ‘very confident’ that they can fulfill the security-related requests of a regulator in a timely manner, and just 27.5% are ‘very satisfied’ that their organization’s security reports align with regulatory compliance needs.
GRC leaders cited their top challenges in fulfilling regulator requests as:
- Getting access to accurate data (35%)
- The number of report requests (29%)
- The length of time it takes to get information from the security team (26%)
The limitations of traditional GRC tools
The issue has been perpetuated by the limitations of traditional GRC tools, which rely on qualitative questionnaires to provide evidence of compliance, an approach that does not reflect the current cyber threat landscape.
92% of senior risk and compliance professionals believe it would be valuable to have quantitative security controls assurance reporting (vs qualitative) and 93.5% believe it’s important to automate security risk and compliance reporting. However, only 11% state that their risk and compliance reporting is currently automated end to end.
96% said it is important to prioritize security risk remediation based on its impact to the business, but most can’t isolate risk to critical business processes composed of people, applications, and devices. Only 33.5% of respondents are ‘very confident’ in their ability to understand all their asset inventories.
Charaka Goonatilake, CTO, Panaseer: “Faced with increasing requests from regulators, GRC leaders have resorted to throwing a lot of people at time-sensitive requests. These manual processes combined with lack of GRC tool scalability necessitates data sampling, which means they cannot have complete visibility or full confidence in the data they are providing.
“The challenge is being exacerbated by new risks introduced by IoT sensors and endpoints, which rarely consider security a core requirement and therefore introduce greater risk and increase the importance of controls and mitigations to address them.”
Andreas Wuchner, Panaseer Advisory Board member: “To face the new reality of cyberthreats and regulatory pressures, many organizations need to fundamentally rethink traditional tools and defences.
“GRC leaders can enhance their confidence to accurately and quickly meet stakeholder needs by implementing Continuous Controls Monitoring, an emerging category of security and risk, which has just been recognised in the 2020 Gartner Risk Management Hype Cycle.”
80% of organizations experienced a cybersecurity breach that originated from vulnerabilities in their vendor ecosystem in the past 12 months, and the average organization had been breached in this way 2.7 times, according to a BlueVoyant survey.
The research also found organizations are experiencing multiple pain points across their cyber risk management program as they aim to mitigate risk across a network that typically encompasses 1,409 vendors.
The study was conducted by Opinion Matters and recorded the views and experiences of 1,505 CIOs, CISOs and Chief Procurement Officers in organizations with more than 1,000 employees across a range of vertical sectors including business and professional services, financial services, healthcare and pharmaceutical, manufacturing, utilities and energy. It covered five countries: USA, UK, Mexico, Switzerland and Singapore.
Third-party cyber risk budgets and other key findings
- 29% say they have no way of knowing if cyber risk emerges in a third-party vendor
- Only 22.5% monitor their entire supply chain
- 32% only re-assess and report their vendor’s cyber risk position either six-monthly or less frequently
- The average headcount in internal and external cyber risk management teams is 12
- 81% say that budgets for third-party cyber risk management are increasing, by an average figure of 40%
Commenting on the research findings, Jim Penrose, COO BlueVoyant, said: “That four in five organizations have experienced recent cybersecurity breaches originating in their vendor ecosystem is of huge concern.
“The research clearly indicated the reasons behind this high breach frequency: only 23% are monitoring all suppliers, meaning 77% have limited visibility and almost one-third only re-assess their vendors’ cyber risk position six-monthly or annually. That means in the intervening period they are effectively flying blind to risks that could emerge at any moment in the prevailing cyber threat environment.”
Multiple pain points exist in third-party cyber risk programs as budgets rise
Further insight into the difficulties that are leading to breaches was revealed when respondents were asked to identify the top three pain points related to their third-party cyber risk programs, in the past 12 months.
The most common problems were:
- Managing the volume of alerts generated by the program
- Working with suppliers to improve security performance, and
- Prioritizing which risks are urgent and which are not.
However, overall responses were almost equally spread across thirteen different areas of concern. In response to these issues, budgets for third-party cyber risk programs are set to rise in the coming year. 81% of survey respondents said they expect to see budgets increase, by 40% on average.
Jim Penrose continues: “The fact that cyber risk management professionals are reporting difficulties across the board shows the complexity they face in trying to improve performance.
“It is encouraging that budget is being committed to tackling the problem, but with so many issues to solve many organizations will find it hard to know where to start. Certainly, the current approach is not working, so simply trying to do more of the same will not shift the dial on third-party cyber risk.”
Variation across industry sectors
Analysis of the responses from different commercial sectors revealed considerable variations in their experiences of third-party cyber risk. The business services sector is suffering the highest rate of breaches, with 89% saying they have been breached via a weakness in a third-party in the past 12 months.
The average number of incidents experienced in the past 12 months was also highest in this sector, at 3.6. This is likely due in part to the fact that firms in the sector reported working with 2,572 vendors on average.
In contrast, only 57% of respondents from the manufacturing sector said they had suffered third-party cyber breaches in the past 12 months. The sector works with 1,325 vendors on average, but had a much lower breach frequency, at 1.7.
Thirteen percent of respondents from the manufacturing sector also reported having no pain points in their third-party cyber risk management programs, a percentage more than twice as high as any other sector.
Commenting on the stark differences observed between sectors, Jim Penrose said: “This underlines that there is no one-size-fits-all solution to managing third-party cyber risk.
“Different industries have different needs and are at varying stages of maturity in their cyber risk management programs. This must be factored into attempts to improve performance so that investment is directed where it has the greatest impact.”
Mix of tools and tactics in play
The survey investigated the tools organizations have in place to implement third-party cyber risk management and found a mix of approaches with no single approach dominating.
Many organizations are evolving towards a data-driven strategy, with supplier risk data and analytics in use by 40%. However, static, point-in-time tactics such as on-site audits and supplier questionnaires remain common.
Jim Penrose concludes: “Overall the research findings indicate a situation where the large scale of vendor ecosystems and the fast-changing threat environment is defeating attempts to effectively manage third-party cyber risk in a meaningful way.
“Visibility into such a large and heterogeneous group of vendors is obscured due to lack of resources and a continuing reliance on manual, point-in-time processes, meaning real-time emerging cyber risk is invisible for much of the time.
“For organizations to make meaningful progress in managing third-party cyber risk and reduce the current concerning rate of breaches, they need to be pursuing greater visibility across their vendor ecosystem and achieving better context around alerts so they can be prioritized, triaged and quickly remediated with suppliers.”
BigPanda revealed the results of an IDG Research survey conducted in the early days of the pandemic. The study explores challenges IT Ops, NOC, DevOps and SRE teams face as their organizations race to capture the digital-led market.
The results of the survey show that, in addition to managing complex and ever-changing IT environments with many different tools, teams are now plagued with an increasing volume of IT incidents and outages which results in customer churn and costly service outages.
“An influx of data from multiple tools, coupled with low levels of automation, can have a paralyzing effect on IT incident management processes,” said Jen Garofalo, IDG’s Research Director.
“More than 40% of respondents indicate IT incident remediation is handled with a mix of manual and automated processes, while another 20% report these processes are mostly manual.”
Complex environments lead to longer incident management cycles
22% of respondents have 20 or more distinct IT teams supporting the different IT and business services at their organizations. On average, enterprises use 20 different monitoring and observability tools to detect potential issues with infrastructure, applications and services.
The average respondent reports that infrastructure is hosted in more than one location including on-premises infrastructure (60%), public cloud (57%), private cloud (47%) and commercial data centers (24%).
47% of IT Ops professionals said coordinating IT incident or outage detection, analysis, and response across siloed IT teams is the biggest challenge they face. Reasons include:
- More than 14,000 alerts are generated from IT monitoring tools on average, and 65% of respondents report that alerts have increased in frequency over the past 12 months.
- 44% of alerts are caused by infrastructure or software changes made by someone in the organization who doesn’t have visibility across all systems to understand the impact of their change.
- Respondents report an average of 12 hours to determine the root cause of a P1 (major) incident.
Further, the survey uncovered the largest business impacts of IT incident management challenges, including increased operating costs (43%), delays in time to market (42%) and decreased IT Ops productivity (41%).
While all of this is happening, more applications are being built and put into production — 74% of respondents expect Development/DevOps workloads to increase over the next 12 months, with 30% expecting a significant increase.
“For a variety of reasons, the COVID-19 pandemic is accelerating the pace at which enterprises are digitally transforming. This, in turn, increases the challenge facing IT Operations teams to keep their companies running smoothly,” said Assaf Resnick, CEO of BigPanda.
“The IDG report clearly shows that corporate executives are not just driving business teams to increase their digital footprint – they are doubling-down on IT’s parallel effort to adopt AI and automation in order to support those new revenue-generating initiatives.”
Budgets for IT operations expected to increase
79% of respondents expect budgets for IT operations to increase over the next 12 months (34% significantly, 45% somewhat). This will be reflected in multiple areas including automating IT incident management, increasing communication/knowledge sharing and improving IT monitoring and event correlation, all of which were cited by more than 50% of respondents.
Meanwhile, most respondents have heard the term AIOps, and 44% are considering or already have a solution with AIOps in place. Those who are considering or already have a solution with AIOps in place are most likely to leverage it to automate IT incident response.
Overall, respondents are most interested in the potential to leverage AIOps to accelerate IT incident and outage resolution.
In the end, the survey confirmed that modern and constantly evolving IT environments require a best-of-breed IT operations toolkit.
71% of CISOs believe cyberwarfare is a threat to their organization, and yet 22% admit to not having a strategy in place to mitigate this risk. This is especially alarming during a period of unprecedented global disruption, as 50% of infosec professionals agree that the increase of cyberwarfare will be detrimental to the economy in the next 12 months.
CISOs and infosec professionals, however, are shoring up their defenses — with 51% and 48% respectively stating that they believe they will need a strategy against cyberwarfare in the next 12-18 months.
These findings, and more, are revealed in Bitdefender’s global 10 in 10 Study, which highlights how, in the next 10 years, cybersecurity success lies in the adaptability of security decision makers, while simultaneously looking back into the last decade to see if valuable lessons have already been learnt about the need to make tangible changes in areas such as diversity.
It explores, in detail, the gap between how security decision makers and infosec professionals view the current security landscape and reveals the changes they know they will need to make in the upcoming months and years of the 2020s.
The study takes into account the views and opinions of 6,724 infosec professionals representing a broad cross-section of organizations, from businesses with 101+ employees to publicly listed enterprises with 10,000+ staff, in a wide variety of industries, including technology, finance, healthcare and government.
The rise and fall (and rise again) of ransomware
Outside of the rise of cyberwarfare threats, an old threat is rearing its head — ransomware. During the disruption of 2020, ransomware has surged, with 43% of infosec professionals reporting that they are seeing a rise in ransomware attacks.
What’s more concerning is that 70% of CISOs/CIOs and 63% of infosec professionals expect to see an increase in ransomware attacks in the next 12-18 months. This is of particular interest as 49% of CISOs/CIOs and 42% of infosec professionals are worried that a ransomware attack could wipe out the business in the next 12-18 months if they don’t increase investment in security.
But what is driving the rise in ransomware attacks? Some suggest it’s because more people are working from home — which makes them an easier target outside of the corporate firewall. The truth might however be tied to money.
59% of CISOs/CIOs and 50% of infosec professionals believe that the business they work for would pay the ransom in order to prevent its data/information from being published — making ransomware a potential cash cow.
A step change in communication is in high demand
Cyberwarfare and ransomware are complex topics to unpack, amongst many others in infosec. The inherent complexity of infosec topics does however make it hard to gain internal investment and support for projects. This is why infosec professionals believe a change is needed.
In fact, 51% of infosec professionals agree that in order to increase investment in cybersecurity, the way that they communicate about security has to change dramatically. This number jumps up to 55% amongst CISOs and CIOs — many of whom have a seat at the most senior decision-making table in their organizations.
The question is, what changes need to be made? 41% of infosec professionals believe that in the future more communication with the wider public and customers is needed, so that everyone, both inside and outside the organization, better understands the risks.
In addition, 38% point out that there is a need for the facilitation of better communication with the C-suite, especially when it comes to understanding the wider business risks.
And last, but not least, 31% of infosec professionals believe using less technical language would help the industry communicate better, so that the whole organization could understand the risks and how to stay protected.
“The reason that 63% of infosec professionals believe that cyberwarfare is a threat to their organization is simple,” said Neeraj Suri, Distinguished Professorship and Chair in Cybersecurity at Lancaster University.
“Dependency on technology is at an all-time high and if someone was to take out the WiFi in a home or office, no one would be able to do anything. This dependency wasn’t there a few years back; it wasn’t even as high a few months back.
“This high dependency on technology doesn’t just open the door for ransomware or IoT threats on an individual level, but also to cyberwarfare which can be so catastrophic it can ruin economies.
“The reason that nearly a quarter of infosec pros don’t currently have a strategy to protect against cyberwarfare is likely complacency. Since they haven’t suffered an attack, or haven’t seen on a wide scale the damage that can be done, they haven’t invested the time in protecting against it.”
Diversity, and specifically neurodiversity, is key to future success
Outside of the drastic changes that are needed in the way cybersecurity professionals communicate, there’s also a need to make a change within the very makeup of the workforce. The infosec industry as a whole has long suffered from a skills shortage, and this looks to remain an ongoing and increasingly obvious issue.
15% of infosec professionals believe that the biggest development in cybersecurity over the next 12-18 months will be the skills gap increasing. If the skills deficit continues for another five years, 28% of CISOs and CIOs say they believe that it will destroy businesses.
And another 50% of infosec professionals believe that the skills gap will be seriously disruptive if it continues for the next 5 years.
Today, however, it will take more than just recruiting skilled workers to make a positive change and protect organizations. In 2015, 52% of infosec workers agreed that there was a lack of diversity in cybersecurity and that it was a concern.
Five years later, in 2020, this remains exactly the same — and that is a significant problem as 40% of CISOs/CIOs and infosec professionals say that the cybersecurity industry should reflect the society around it to be effective.
What’s more, 76% of CISOs/CIOs, and 72% of infosec professionals, believe that there is a need for a more diverse skill set among those tackling cybersecurity tasks. This is because 38% of infosec professionals say that neurodiversity will make cybersecurity defenses stronger, and 33% revealed a more neurodiverse workforce will level the playing field against bad actors.
While it’s clear that the cybersecurity skills gap is here to stay, it’s also clear why changes need to be made to the makeup of the industry.
Liviu Arsene, Global Cybersecurity Researcher at Bitdefender concludes, “2020 has been a year of change, not only for the world at large, but for the security industry. The security landscape is rapidly evolving as it tries to adapt to the new normal, from distributed workforces to new threats. Amongst the new threats is cyberwarfare.
“It’s of great concern to businesses and the economy — and yet not everyone is prepared for it. At the same time, infosec professionals have had to keep up with new threats from an old source, ransomware, that can affect companies’ bottom lines if not handled carefully.
“The one thing we know is that the security landscape will continue to evolve. Changes will happen, but we can now make sure they happen for better and not for worse. To succeed in the new security landscape, the way we as an industry talk about security has to become more accessible to a wider audience to gain support and investment from within the business.
“In addition, we have to start thinking about plugging the skills gap in a different way — we have to focus on diversity, and specifically neurodiversity, if we are to stand our ground and ultimately defeat bad actors.”
Only 44% of healthcare providers, including hospitals and health systems, conformed to protocols outlined by the NIST CSF – with scores in some cases trending backwards since 2017, CynergisTek reveals.
Healthcare providers and NIST CSF
Analysts examined nearly 300 assessments of provider facilities across the continuum, including hospitals, physician practices, ACOs and Business Associates.
The report also found that healthcare supply chain security is one of the lowest ranked areas for NIST CSF conformance. This is a critical weakness, given that COVID-19 demonstrated just how broken the healthcare supply chain really is with providers buying PPE from unvetted suppliers.
“We found healthcare organizations continue to enhance and improve their programs year-over-year. The problem is they are not investing fast enough relative to an innovative and well-resourced adversary,” said Caleb Barlow, CEO of CynergisTek.
“These issues, combined with the rapid onset of remote work, accelerated deployment of telemedicine and impending openness of EHRs and interoperability, have set us on a path where investments need to be made now to shore up America’s health system.
“However, the report isn’t all doom and gloom. Organizations that have invested in their programs and had regular risk assessments, devised a plan, addressed prioritized issues stemming from the assessments and leveraged proven strategies like hiring the right staff and evidence-based tools have seen significant improvements to their NIST CSF conformance scores.”
Bigger budgets don’t mean better security performance
The report revealed bigger healthcare institutions with bigger budgets didn’t necessarily perform better when it comes to security, and in some cases, performed worse than smaller organizations or those that invested less.
In some cases, this was a direct result of consolidation, where health systems directly connect newly-acquired hospitals without first shoring up their security posture and conducting a compromise assessment.
“What our report has uncovered over recent years is that healthcare is still behind the curve on security. While healthcare’s focus on information security has increased over the last 15 years, investment is still lagging. In the age of remote working and an attack surface that has exponentially grown, simply maintaining a security status quo won’t cut it,” said David Finn, EVP of Strategic Innovation at CynergisTek.
“The good news is that issues emerging in our assessments are largely addressable. The bad news is that it is going to require investment in an industry still struggling with financial losses from COVID-19.”
Leading factors influencing performance include poor security planning and lack of organizational focus, inadequate reporting structures and funding, confusion around priorities, lack of staff and no clear plan.
Key strategies to bolster healthcare security and achieve success
Look under the hood at security and privacy amid mergers and acquisitions: For health systems planning to integrate new organizations into the fold through mergers and acquisitions, leadership should look under the hood and be more diligent when examining the organization’s security and privacy infrastructure, measures and performance.
It’s important to understand their books and revenue streams as well as their potential security risks and gaps to prevent these issues from becoming liabilities.
Make security an enterprise priority: While other sectors like finance and aerospace have treated security as an enterprise-level priority, healthcare must also make this kind of commitment.
Understanding how these risks tie to the bigger picture will help an organization that thinks it cannot afford to invest in privacy and information security risk management activities understand why making such an investment is crucial.
Hospitals and healthcare organizations should create collaborative, cross-functional task forces like enterprise response teams, which offer other business units an eye-opening look into how security and privacy touch all parts of the business including financial, HR, and more.
Money isn’t a solution: Just throwing money at a problem doesn’t work. Security leaders need to identify priorities and have a plan which leverages talent, tried and true strategies like multi-factor authentication, privileged access management and on-going staff training to truly up level their defenses and take a more holistic approach, especially when bringing on new services such as telehealth.
Accelerate the move to cloud: While healthcare has traditionally been slow to adopt the cloud, these solutions provide the agility and scalability that can help leaders cope with situations like COVID-19, and other crises more effectively.
Shore up security posture: We frequently learn the hard way that security can disrupt workflow. COVID-19 taught us that workflow can also disrupt security, and things are going to get worse before they get better. Get an assessment quickly to determine immediate needs and come up with a game plan to bolster the defenses needed in this next normal.
Hackers are targeting everyone and taking advantage of fear, uncertainty, and a 24/7 news cycle that can dwell on a single theme for weeks on end. The victim pool includes everyone from the global remote workforce (some working in industries that didn’t know remote work was even feasible), to essential workers in labs working on vaccines or treatment plans for COVID-19.
According to Microsoft, phishing and social engineering attacks have jumped to 30,000 a day, and highly sophisticated ransomware attacks are up 800%. Ransomware’s latest tactic is a conversion to doxware: attackers steal company data before encrypting it and threaten to reveal that your organization has been hacked and that sensitive customer data has been compromised. So even if you have backups and don’t pay the hackers, your reputation is still at risk.
As ransomware attacks become more frequent, IT and information security leaders often end up pointing fingers at each other after a cyber-attack. And there are many fingers in the room, adding to the chaos, trying to avoid responsibility, and deflecting ownership of the problem to other stakeholders.
The CISO has the biggest finger, but should point carefully
A recent WSJ article talked about how CISOs are now being elevated to corporate leadership roles. We are currently witnessing a growing epidemic of cyber risk. Today more than ever, CISOs can use their influence to do more than just drive technological change by piercing the silos across the enterprise.
But it’s going to take a completely different method of communicating. Results must be demonstrated much faster, and they must show greater cyber maturity and resilience so clearly that they can’t be disputed. In a nutshell, cybersecurity must be spoken about in business terms, in dollars and cents, not bits and bytes.
This has often not been the case. Before the pandemic, it wasn’t unusual for a CISO to walk into a CFO’s office and have a budget conversation with a color quadrant of red, yellow, and green. Security vulnerabilities in red needed the most attention and would require immediate investment. Success would mean having less red and yellow on the chart. Pitching this type of security progress as vague risk reduction was enough to win approval for the latest technology, address control deficiencies, and alleviate other impending threats.
The days of vague cyber plans and investments are over
In June, the International Monetary Fund forecast a 4.9 percent contraction in global GDP this year.
American credit rating agency Fitch Ratings announced that the number of defaults in the first five months exceeded the total for 2019 and that the pandemic fallout will erase $5 trillion more. There is no doubt that budgets will be more closely scrutinized in this global contraction. In 2020 and beyond, an entire cybersecurity program must answer the critical question: “Can you put a number on this technology investment?”
Choose the right tools
To validate a cyber investment with a budget holder, one must first understand the types of cyber events the organization may face and the range of business assets and operations in question.
Conversations around cyber risk management often center on estimating both the probability and the impact of a risk event. Cyber risk analysis centered on probability is alluring because we all want to know the future. When you can predict your cyber future, it becomes very easy to prioritize which risks require more attention. So, considering that most organizations have limited resources, one magic number can give leaders confidence that their cybersecurity programs are optimized and make them look good to leadership across the enterprise. It seems like a good approach now, with shrinking budgets.
However, it’s not enough.
A focus on probability can be misleading and even perilous for analyzing high-impact low-frequency events, such as a large data breach or data destruction event. The tools a leader chooses should look at the big picture in a collaborative and flexible manner that includes input from the entire enterprise. This will allow decisions to be made faster and more accurately.
I’d recommend an approach to cyber risk investment grounded in financial impact analysis, one that allows leaders from every business unit to weigh in on which operations and outcomes the company needs to prioritize, and to determine plausible cyber incidents that could disrupt business operations and assets.
These financial impacts help inform business decisions such as insurance purchases, investing in controls and more. These costs should be categorized depending on who is affected (and what type of impact it is). And the company should be able to optimize the entire portfolio of controls by playing out how changing one or more controls will impact their exposure. With this kind of methodology, a CISO can quickly determine if it’s cheaper to implement a control or buy insurance or put a number on impact (and sleep better at night if it’s relatively low).
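The control-versus-insurance trade-off described above can be sketched with a simple annualized loss expectancy (ALE) comparison. All figures below are hypothetical, chosen only to illustrate the arithmetic a CISO might put in front of a budget holder:

```python
# Sketch: comparing a control investment against cyber insurance using
# annualized loss expectancy (ALE). Every number here is hypothetical.

def ale(annual_frequency: float, impact_per_event: float) -> float:
    """Annualized loss expectancy = expected events per year * cost per event."""
    return annual_frequency * impact_per_event

# Baseline exposure: a breach roughly every 5 years costing $2M
baseline = ale(0.2, 2_000_000)                        # $400,000/year

# Option A: a control that halves breach frequency, costing $120k/year
with_control = ale(0.1, 2_000_000) + 120_000          # $320,000/year

# Option B: insurance with a $150k premium covering 80% of each loss
with_insurance = ale(0.2, 2_000_000 * 0.2) + 150_000  # $230,000/year

for name, cost in [("baseline", baseline),
                   ("control", with_control),
                   ("insurance", with_insurance)]:
    print(f"{name}: ${cost:,.0f}/year")
```

With these illustrative inputs, insurance edges out the control, but small changes to frequency or coverage assumptions flip the answer, which is why the portfolio should be re-run as controls change.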
CISOs now have a golden opportunity to take advantage of their publicity and show the organization (and the world) that even in times of uncertainty, cybersecurity investment can be managed quickly and bring a much-needed structure in these times.
The cybersecurity skills shortage means that many organizations are in urgent need of talented and experienced security professionals. This has been intensified by the pandemic, with security teams stretched to breaking point trying to secure new remote working regimes against the influx of opportunistic cyberattacks. There is a human cost to this high-pressure environment, and new research from SIRP shows that the additional burdens placed on security operations center (SOC) teams due to COVID-19 has …
The post Security teams stretched to breaking point trying to secure new remote working regimes appeared first on Help Net Security.
Aligning security and delivery at a strategic level is one of the most complex challenges for executives. It starts with an understanding that risk-based thinking should not be perceived as an overhead or tax, but a value added component of creating a high-quality product or service.
One solution is balanced development automation, which is about aligning automated DevOps (development and IT operations) pipelines with business risk and compliance. To attain this, alignment must be achieved between risk and business teams at two different levels:
1. Strategic level (CEO, COO, CFO, CRO, CIO, DPO)
2. Operational level (DevOps engineers, risk engineers)
The strategic level is more focused on delivery of business value, customer needs, risk, regulations, compliance, and so on. The operational level is focused on aligning to governance protocols like risk thresholds, delivery timelines, and automation during the build phases of business value creation.
Achieving alignment at the strategic level
At the executive level, both sides of business and risk need to concentrate on quality first – only then does it make sense to go about balancing risk and speed. Otherwise, risk and speed wind up as the only concerns and that risks poor quality showing up in products and services at the end of the line.
The end of the line in any process is where the customer who actually receives the value of a product or service experiences the touchpoint with your portfolio. It is there that perceived value needs the appropriate operational indicators. Some refer to these as customer-driven metrics: the ones that measure operational key results in alignment with operational risk metrics.
Once executive alignment is achieved on quality, the next step is to measure against key strategic customer metrics like attrition and satisfaction. This gives an indication of the value customers receive from a product or service. Organizations should think about appropriate high level metrics and measurements at the end of the development lifecycle, risk thresholds, and how these map to their customer. I consider these as the “parent” metrics.
After that, consider “child” metrics in the plan, delivery, and operation of DevOps – from here, governance and speed will come into play. A key problem today is the self-attestation audit activity at the end of the line process, which is hard to validate. This just doesn’t integrate well with a DevOps process because the measurement is reactive and coming too far down the pipeline. Worse yet, going back and fixing risk issues later on gets perceived as getting in the way. What needs to happen is a shift to the left of the development process where risk is measured early and often.
As organizations evolve into a more digital set of processes, this shift left is critical to understanding those key measurements from the beginning of the lifecycle. Otherwise, junk at the beginning will just automate junk faster all the way down the line. Eventually, there will be a higher price to pay for poor quality.
Achieving alignment at the operational level
Operationally, challenges stem from misalignment in understanding who the end customer really is. Companies often design products and services for themselves and not for the end customer. Once an organization focuses on the end user and how they are going to use that product and service, the shift in thinking occurs. Now it’s about looking at what activities need to be done to provide value to that end customer.
Thinking this way, there will be features, functions, and processes that have never been done before. In the words of Stephen Covey, “Keep the main thing the main thing.” What is the main thing? The customer. What features and functionality do you need for each of them from a value perspective? And you need to add governance to that.
Effective governance ensures delivery of a quality product or service that meets your objectives without monetary or punitive pain. The end customer benefits from that product or service having effective and efficient governance.
That said, heavy governance is also waste. There has to be a tension, a flow, a balance between hierarchical governance and self-governance, in which every person in the organization clearly understands the value their role contributes to the end customer. With that, employees and contractors alike feel empowered and purposeful in their work and contributions.
Once the customer value proposition is clearly identified, organizations can identify how day to day operations contribute value to that end customer in an efficient way. This is where lean thinking helps, looking for ways to reduce waste in the value creation process. If something is not a part of the value proposition, is it necessary? If something is missing that would add significant value, how can we add it? This will lead to an alignment that drives value creation.
Delivering on DevOps speed is no longer good enough. Organizations also need to balance the need for speed against regulatory, compliance, and security concerns, and they need to do this fast and first. If a firm can’t get there fast by restructuring its operating model and associated skills, it is best to have Scrum Masters trained in Lean, Six Sigma, TOGAF, and assorted cybersecurity GRC frameworks to help it through the iterations. I call that the big “Iterative, Fast and First” (IFF) principle of GRC by Design.
Are the activities an organization is conducting offering something of value to the business? Answering this question has implications for both strategic and operational teams. The business value context sets up alignment with the end customer and drives value at each stage through balanced development automation.
One of the cornerstones of a security leader’s job is to successfully evaluate risk. A risk assessment is a thorough look at everything that can impact the security of an organization. When a CISO determines the potential issues and their severity, measures can be put in place to prevent harm from happening.
To select a suitable risk assessment solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.
Jaymin Desai, Offering Manager, OneTrust
First, consider what types of assessments or control content, such as frameworks, laws, and standards, are readily available for your business (e.g., NIST, ISO, CSA CAIQ, SIG, HIPAA, PCI DSS, NYDFS, GDPR, EBA, CCPA). This is an area where you can leverage templates to bypass building and updating your own custom records.
Second, consider the assessment formats. Look for a technology that can automate workflows to support consistency and streamline completion. This level of standardization helps businesses scale risk assessments to the line of business users. A by-product of workflow-based structured evaluations is the ability to improve your reporting with reliable and timely insights.
One other key consideration is how the risk assessment solution can scale with your business. This is important in evaluating your efficiencies over time. Are the assessments static exports to Excel, or can they be integrated into a live risk register? Can you map insights gathered from responses to adjust risk across your assets, processes, vendors, and more? Consider the core data structure and how you can model and adjust it as your business changes and your risk management program matures.
The solution should enable you to discover, remediate, and monitor granular risks in a single, easy-to-use dashboard while engaging with the first line of your business to keep risk data current and context-rich with today’s information.
Brenda Ferraro, VP of Third Party Risk, Prevalent
The right risk assessment solution will drive program maturity from compliance, to data breach avoidance, to third-party risk management.
There are seven key fundamentals that must be considered:
- Network repository: Uses the “fill out once, use with many” approach to rapidly obtain risk information awareness.
- Vendor risk visibility: Harmonizes inside-out and outside-in vendor risk and proactively shares actionable insights to enhance decision-making on prioritization, remediation, and compliance.
- Flexible automation: Helps the enterprise to place focus quickly and accurately on risk management, not administrative tasks, to reduce third-party risk management process costs.
- Scalability: Adapts to changing processes, risks, and business needs.
- Tangible ROI: Reduces time and costs associated with the vendor management lifecycle to justify cost.
- Advisory and managed services: Has subject matter experts to assist with improving your program by leveraging the solution.
- Reporting and dashboards: Provides real-time intelligence to drive more informed, risk-based decisions internally and externally at every business level.
The right risk assessment solution selection will enable dynamic evolution for you and your vendors by using real-time visibility into vendor risks, more automation and integration to speed your vendor assessments, and by applying an agile, process-driven approach to successfully adapt and scale your program to meet future demands.
Fred Kneip, CEO, CyberGRX
Organizations should look for a scalable risk assessment solution that can deliver informed, risk-reducing decision making. To be truly valuable, risk assessments need to go beyond lengthy questionnaires that serve as check-the-box exercises and provide no insight, and beyond a simple outside-in rating that, alone, can be misleading.
Rather, risk assessments should help you collect accurate and validated risk data that enables decision making, and ultimately allow you to identify and reduce risk across your ecosystem at the individual vendor level as well as the portfolio level.
Optimal solutions will help you identify which vendors pose the greatest risk and require immediate attention as well as the tools and data that you need to tell a complete story about an organization’s third-party cyber risk efforts. They should also help leadership understand whether risk management efforts are improving the organization’s risk posture and if the organization is more or less vulnerable to an adverse cyber incident than it was last month.
Jake Olcott, VP of Government Affairs, BitSight
Organizations are now being held accountable for the performance of their cybersecurity programs, and ensuring businesses have a strong risk assessment strategy in place can have a major impact. The best risk assessment solutions meet four specific criteria: they are automated, continuous, comprehensive, and cost-effective.
Leveraging automation for risk assessments means that the technology takes the brunt of the workload, giving security teams more time to focus on other tasks important to the business. Risk assessments should be continuous as well: a point-in-time approach is inadequate and does not provide the full picture, so it’s important that assessments are delivered on an ongoing basis.
Risk assessments also need to be comprehensive and cover the full breadth of the business including third and fourth party risks, and address the expanding attack surface that comes with working from home.
Lastly, risk assessments need to be cost-effective. As budgets are being heavily scrutinized across the board, ensuring that a risk assessment solution does not require significant resources can make a major impact for the business and allow organizations to maximize their budgets to address other areas of security.
Mads Pærregaard, CEO, Human Risks
When you pick a risk assessment tool, you should look for three key elements to ensure a value-adding and effective risk management program:
1. Reduce reliance on manual processes
2. Reduce complexity for stakeholders
3. Improve communication
Tools that rely on constant manual data entry, on someone remembering to make updates, and on a complicated risk methodology will likely lead to outdated information and errors, meaning valuable time is lost and decisions are made too late or on the wrong basis.
Tools that automate processes and data gathering give you awareness of critical incidents faster, reducing response times. They also reduce dependency on a few key individuals that might otherwise have responsibility for updating information, which can be a major point of vulnerability.
Often, non-risk management professionals are involved with or responsible for implementation of mitigating measures. Look for tools that are user-friendly and intuitive, so it takes little training time and teams can hit the ground running.
Critically, you must be able to communicate the value that risk management provides to the organization. The right tool will help you keep it simple, and communicate key information using up-to-date data.
Steve Schlarman, Portfolio Strategist, RSA Security
Given the complexity of risk, risk management programs must rely on a solid technology infrastructure, and a centralized platform is a key ingredient of success. Risk assessment processes need to share data and promote a strong governance culture.
Choosing a risk management platform that can not only solve today’s tactical issues but also lay a foundation for long-term success is critical.
Business growth is interwoven with technology strategies and therefore risk assessments should connect both business and IT risk management processes. The technology solution should accelerate your strategy by providing elements such as data taxonomies, workflows and reports. Even with best practices within the technology, you will find areas where you need to modify the platform based on your unique needs.
The technology should make that easy. As you engage more front-line employees and cross-functional groups, you will need the flexibility to make adjustments. There are some common entry points to implement risk assessment strategies but you need the ability to pivot the technical infrastructure towards the direction your business needs.
You need a flexible platform to manage multiple dimensions of risk and choosing a solution provider with the right pedigree is a significant consideration. Today’s risks are too complex to be managed with a solution that’s just “good enough.”
Yair Solow, CEO, CyGov
The starting point for any business should be clarity on the frameworks they are looking to cover both from a risk and compliance perspective. You will want to be clear on what relevant use cases the platform can effectively address (internal risk, vendor risk, executive reporting and others).
Once this has been clarified, it is a question of weighing up a number of parameters. For a start, how quickly can you expect to see results? Will it take days, weeks, months or perhaps more? Businesses should also weigh up the quality of user experience, including how difficult the solution is to customize and deploy. In addition, it is worth considering the platform’s project management capabilities, such as efficient ticketing and workflow assignments.
Usability aside, there are of course several important factors when it comes to the output itself. Is the data produced by the solution in question automatically analyzed and visualized? Are the automatic workflows replacing manual processes? Ultimately, in order to assess the platform’s usefulness, businesses should also be asking to what extent the data is actionable, as that is the most important output.
This is not an exhaustive list, but these are certainly some of the fundamental questions any business should be asking when selecting a risk assessment solution.
A number of organizations face shortcomings in monitoring and securing their cloud environments, according to a Tripwire survey of 310 security professionals.
76% of security professionals state they have difficulty maintaining security configurations in the cloud, and 37% said their risk management capabilities in the cloud are worse compared with other parts of their environment. 93% are concerned about human error accidentally exposing their cloud data.
Few orgs assessing overall cloud security posture in real time
Attackers are known to run automated searches to find sensitive data exposed in the cloud, making it critical for organizations to monitor their cloud security posture on a recurring basis and fix issues immediately.
However, the report found that only 21% of organizations assess their overall cloud security posture in real time or near real time. While 21% said they conduct weekly evaluations, 58% do so only monthly or less frequently. Despite widespread worry about human errors, 22% still assess their cloud security posture manually.
“Security teams are dealing with much more complex environments, and it can be extremely difficult to stay on top of the growing cloud footprint without having the right strategy and resources in place,” said Tim Erlin, VP of product management and strategy at Tripwire.
“Fortunately, there are well-established frameworks, such as CIS benchmarks, which provide prioritized recommendations for securing the cloud. However, the ongoing work of maintaining proper security controls often goes undone or puts too much strain on resources, leading to human error.”
Utilizing a framework to secure the cloud
Most organizations utilize a framework for securing their cloud environments – CIS and NIST being two of the most popular – but only 22% said they are able to maintain continuous cloud security compliance over time.
While 91% of organizations have implemented some level of automated enforcement in the cloud, 92% still want to increase their level of automated enforcement.
Additional survey findings show that automation levels varied across cloud security best practices:
- Only 51% have automated solutions that ensure proper encryption settings are enabled for databases or storage buckets.
- 45% automatically assess new cloud assets as they are added to the environment.
- 51% have automated alerts with context for suspicious behavior.
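Automated checks like the ones the survey describes, for instance flagging storage buckets without encryption, can be simple in principle. The sketch below runs against a hypothetical inventory dictionary; in practice, the inventory would be populated from a cloud provider's API (for example, a per-bucket S3 GetBucketEncryption call) on a schedule:

```python
# Sketch: an automated posture check that flags storage buckets whose
# encryption setting is missing or disabled. The inventory structure
# and bucket names here are hypothetical examples.

def unencrypted_buckets(inventory):
    """Return the names of buckets without encryption enabled."""
    return [b["name"] for b in inventory
            if not b.get("encryption", {}).get("enabled", False)]

inventory = [
    {"name": "backups", "encryption": {"enabled": True, "algo": "AES256"}},
    {"name": "logs",    "encryption": {"enabled": False}},
    {"name": "exports"},  # no encryption configuration at all
]

print(unencrypted_buckets(inventory))  # → ['logs', 'exports']
```

Running a check like this continuously, rather than monthly, is the difference the report highlights between real-time posture assessment and point-in-time audits.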
New research shows almost three quarters of large businesses believe remote working policies introduced to help stop the spread of COVID-19 are making their companies more vulnerable to cyberattacks. You need to take steps to protect the remote workforce. AT&T’s study of 800 cybersecurity professionals across the UK, France and Germany shows that while 88% initially felt well prepared for the migration, 55% now believe widespread remote working is making their companies more or much …
Cybersecurity professionals know all too well that crises tend to breed new threats to organizational security. The current COVID-19 pandemic is evidence of this. Health agencies are being attacked, massive phishing operations are underway, and security flaws in leading communications platforms are coming to light.
Even on an individual basis, people are more susceptible to scams, fraud and manipulation in times of fear. From January 1 until today, the US Federal Trade Commission has received over 124,140 fraud and ID theft reports related to COVID-19, with people reporting losses upwards of $80.3 million.
Despite the presence of a robust cybersecurity infrastructure, enterprise systems are not battle-tested to secure an entire workforce that is now based at home. Cybersecurity analysts can confirm that to properly manage a remote digital workforce, an enterprise should focus its security measures on three key pillars:
1. Doubling up on identity access management: Enacting multifactor authentication and cycling passwords are critically important during times of crisis when phishing attempts spike and malicious hackers have an avenue into company data and resources.
2. Broaden connectivity awareness: Shield employees from parallel Wi-Fi networks set up by bad actors by increasing IT awareness and broadening VPN access. Unaware employees that connect to the parallel (rogue) network by mistake can put the company at risk.
3. Reassess policies and procedures: Companies operating today are in unfamiliar territory and should continually reassess current cyber risk policies and procedures in order to identify and evaluate risks associated with potential threats and security weaknesses.
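The multifactor authentication called for in the first pillar often rests on time-based one-time passwords. As a point of reference, the TOTP algorithm standardized in RFC 6238 can be implemented with the standard library alone; the secret below is an illustrative placeholder, not a real credential:

```python
# Sketch: the TOTP algorithm (RFC 6238) behind many MFA authenticator
# apps. The shared secret here is an illustrative placeholder.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", timestamp // step)          # time-step counter
    digest = hmac.new(key, counter, hashlib.sha1).digest()  # HMAC-SHA1 (RFC 4226)
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and authenticator app derive the same 6-digit code from the
# shared secret, so the code proves possession of a second factor.
secret = "JBSWY3DPEHPK3PXP"
print(totp(secret, int(time.time())))
```

Because both sides compute the code independently from the shared secret and the clock, no one-time password ever travels over the network ahead of time, which is what makes the scheme resilient to credential phishing of static passwords.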
Overcoming security challenges in a crisis
As we’ve seen with COVID-19, a crisis can disrupt business significantly. Without plans for how to deal with such a disruption, businesses will face an overwhelming challenge of managing and securing network infrastructure as operations shift to accommodate changes within the organization. It is paramount that enterprises determine ahead of time what to do differently, should a time of crisis rear its head. This also translates into a major opportunity for security teams that can proactively begin to analyze current security measures and develop a business plan of what the future might look like.
As part of this plan, automation and artificial intelligence (AI) should take center-stage. Most modern networks are growing far too complex for humans to secure manually, and fighting a growing number of threats requires automated operational workflows and integrated threat intelligence.
In addition, a high degree of system integration with these technologies enables greater collaboration between security analysts, no matter where they’re located. It is also important to embed threat intelligence across multiple vectors (e.g. endpoints, privileged user access, machine communications), so that Communication Service Providers (CSPs) can detect and analyze potential threats in real time.
Security teams that have integrated their networks with automated, cognitively intelligent software, whether it be AI or machine learning (ML), have already been privy to its benefits. With access to dynamic scanning for threats and insight into potential vulnerabilities, teams can tackle challenges quickly, with more visibility and effectiveness.
These new software capabilities enable security operations teams to:
- Oversee, manage and limit access to key operational systems and assets within the network to ensure that remote employees do not inadvertently or deliberately misuse privileged information.
- Identify network vulnerabilities automatically, detect threats sooner, and reduce the number of false positives, saving time and preventing alert fatigue.
- Flag and respond immediately to cyberattacks, minimizing the time needed to address each incident and the overall impact.
Automation and cognitive intelligence are critical to guarding enterprise infrastructure against scams, spear-phishing and zero-day attacks that can evade traditional signature-based security. By adopting these capabilities today, CSPs can set themselves up for longer-term networking success. With the rise of 5G, implementing strong security policies and procedures for complex networks has become more critical than ever. Through software that utilizes automation, AI and ML, operators can provide end-to-end quality across a diverse range of security use cases and business models in 5G.
Microsoft has released (in public preview) several new enterprise security offerings to help companies meet the challenges of remote work.
Double Key Encryption for Microsoft 365
Secure information sharing is always a challenge, and Microsoft thinks it has the right solution for organizations in highly regulated industries (e.g., financial services, healthcare).
“Double Key Encryption (…) uses two keys to protect your data—one key in your control, and a second key is stored securely in Microsoft Azure. Viewing data protected with Double Key Encryption requires access to both keys. Since Microsoft can access only one of these keys, your protected data remains inaccessible to Microsoft, ensuring that you have full control over its privacy and security,” the company explained.
“You can host the Double Key Encryption service used to request your key, in a location of your choice (on-premises key management server or in the cloud) and maintain it as you would any other application.”
This Microsoft enterprise security solution allows organizations to migrate sensitive data to the cloud or share it via a cloud platform without relying solely on the provider’s encryption. It also ensures that the cloud provider and collaborating third parties can’t access the sensitive data.
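The underlying principle, that data is recoverable only with both keys, so neither party alone can read it, can be illustrated with a toy construction. This is a conceptual sketch of the two-key idea, not Microsoft's actual implementation:

```python
# Toy illustration of the double-key principle: plaintext is masked by
# keystreams derived from BOTH keys, so either key alone is useless.
# This is a conceptual sketch, not Microsoft's implementation.
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a deterministic byte stream from a key and a nonce."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def double_crypt(data: bytes, key_a: bytes, key_b: bytes, nonce: bytes) -> bytes:
    """XOR data with both keystreams; the same call encrypts and decrypts."""
    sa = keystream(key_a, nonce, len(data))
    sb = keystream(key_b, nonce, len(data))
    return bytes(d ^ a ^ b for d, a, b in zip(data, sa, sb))

nonce = secrets.token_bytes(16)
customer_key = secrets.token_bytes(32)   # stays under the customer's control
cloud_key = secrets.token_bytes(32)      # held by the cloud provider

ct = double_crypt(b"patient record", customer_key, cloud_key, nonce)
# Both keys together recover the plaintext...
assert double_crypt(ct, customer_key, cloud_key, nonce) == b"patient record"
# ...but a party holding only one key sees nothing useful.
assert double_crypt(ct, customer_key, secrets.token_bytes(32), nonce) != b"patient record"
```

Production schemes use authenticated encryption and a key service rather than raw XOR streams; the sketch only shows why neither key holder alone can decrypt.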
Microsoft Endpoint Data Loss Prevention
“Data Loss Prevention solutions help prevent data leaks and provide context-based policy enforcement for data at rest, in use, and in motion on-premises and in the cloud,” Alym Rayani, Senior Director, Microsoft 365, noted.
“Built into Windows 10, Microsoft Edge, and the Office apps, Endpoint DLP provides data-centric protection for sensitive information without the need for an additional agent, enabling you to prevent risky or inappropriate sharing, transfer, or use of sensitive data in accordance with your organization’s policies.”
Organizations can use it to prevent the copying of sensitive content to USB drives, the printing of sensitive documents, the uploading of sensitive files to a cloud service, access to sensitive files by unallowed apps, and more.
When users attempt to do a risky action, they are alerted to the dangers and provided with a helpful explanation and guidance.
Insider Risk Management and Communication Compliance
Insider Risk Management is not a new offering from Microsoft, but it has been augmented with new features that deliver quality insights related to the obfuscation, exfiltration, or infiltration of sensitive information.
“For those using Microsoft Defender Advanced Threat Protection (MDATP), we can now provide insights into whether someone is trying to evade security controls by disabling multi-factor authentication or installing unwanted software, which may indicate potentially malicious behavior,” explained Talhar Mir, Principal PM at Microsoft.
“Finally, one of the key early indicators as to whether someone may choose to participate in malicious activities is disgruntlement. In this release, we are further enhancing our native HR connector to allow organizations to choose whether they want to use additional HR insights that might indicate disgruntlement to initiate a policy.”
Communication Compliance was also introduced earlier this year, but it now offers enhanced insights and improved actions to help foster a culture of inclusion and safety within the organization.
The average $5 billion company incurs delays of roughly 5 weeks per year in new product launches due to missed risks, with a $99 million opportunity cost, according to Gartner.
Opportunity costs from missing risks
A survey of more than 382 strategic initiative leaders quantified the cost of missing risks in strategic initiatives. For an average $5 billion revenue company it amounts to $99 million annually in opportunity cost from delayed new product launches alone. Initiatives where unexpected risks are not surfaced and mitigated in a timely fashion are delayed by an average of five weeks per year.
Moreover, in a related survey of 111 enterprise risk management (ERM) leaders, just 6% felt that their organization’s risk response was timely during strategic initiatives.
“These findings show that risk response usually is not timely,” said Emily Riley, senior principal, research in the Gartner Audit and Risk practice. “But they also show the huge cost of an untimely response. The recent COVID-19 pandemic illustrates the need for an agile response to unexpected risks.”
Benefits of a timely risk response
Experts looked at how strategic initiatives performed against several measures and how this was affected by the timeliness of risk responses.
“The performance benefits of a timely risk response stand out clearly,” said Ms. Riley. “There’s a business opportunity here because ERM leaders expressed their desire to be more involved in supporting strategic initiative success.”
Seventy-six percent of ERM heads said they wanted to increase the proportion of their time spent on strategic initiatives. More than half said that their involvement should come at the earliest stages of a strategic initiative. Yet currently just 11% feel they are involved before an initiative’s execution.
Unexpected risks and information roadblocks
“The problem we often see is initiative teams are not getting the information they need to act on risks in a timely manner,” said Ms. Riley. “This is one area where ERM teams can add value.”
This can have several root causes. Sometimes many individuals are involved in an initiative without clear accountability to one another. There is also often reluctance to candidly share information about threats to high-stakes projects. Another common cause is a focus on performance metrics that overshadows forward-looking considerations.
“ERM’s role should be to connect initiative teams with subject matter experts, to facilitate opportunities for anonymous sharing of concerns, and to develop risk indicators that consider leading indicators of project success,” said Ms. Riley.
Instituting an in-house cyber threat intelligence (CTI) program as part of the larger cybersecurity efforts can bring about many positive outcomes:
- The organization may naturally switch from a reactive cybersecurity posture to a predictive, proactive one.
- The security team may become more efficient and better prepared for detecting threats, preventing security incidents and data breaches, and reacting to active cyber intrusions.
- The exchange of pertinent threat intelligence with other organizations may improve collaboration and preparedness.
But these positive results depend on several factors.
Some may think, for example, that cybersecurity is directly proportional to the amount of threat intelligence they collect.
In reality, though, threat intelligence can only serve an organization to the extent that the organization is able to digest the information and rapidly operationalize and deploy countermeasures.
“You may collect information on an ongoing or future threat to your organization to include who the threat actor is, what are they going after, what is the tactic they will utilize to get in your network, how are they going to move laterally, how are they going to exfil information and when will the activity take place. You can collect all the relevant threat information but without the infrastructure in place to analyze the large amount of data coming in, the organization will not succeed in successfully orienting themselves and acting upon the threat information,” Santiago Holley, Global Threat Intelligence Lead at Thermo Fisher Scientific, told Help Net Security.
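A minimal sketch of what operationalizing collected threat data can look like, assuming a hypothetical indicator feed and event-log format, is matching indicators of compromise against observed events:

```python
# Hypothetical indicator feed entries; real feeds use formats like STIX/TAXII.
indicators = [
    {"type": "ip", "value": "203.0.113.7", "threat": "credential stuffing"},
    {"type": "domain", "value": "bad.example.net", "threat": "phishing"},
]

# Hypothetical network events pulled from logs.
events = [
    {"src_ip": "198.51.100.2", "domain": "intranet.local"},
    {"src_ip": "203.0.113.7", "domain": "login.example.com"},
]

def match_events(events, indicators):
    # Index indicators by type for fast lookup, then flag any event that matches.
    by_ip = {i["value"]: i for i in indicators if i["type"] == "ip"}
    by_domain = {i["value"]: i for i in indicators if i["type"] == "domain"}
    hits = []
    for ev in events:
        ind = by_ip.get(ev["src_ip"]) or by_domain.get(ev["domain"])
        if ind:
            hits.append({"event": ev, "threat": ind["threat"]})
    return hits

hits = match_events(events, indicators)
```

This is the easy half; as Holley notes, the hard part is having the analysis infrastructure and processes to act on the hits at scale.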
Working towards a threat intelligence program
Holley has worked in multiple threat intelligence and cyber positions over the past ten years, including a stint as a Threat Intelligence Lead with the FBI, and this allows him to offer some advice to security leaders that have been tasked with setting up a robust threat intelligence program for their organization.
One of the first steps towards establishing a threat intelligence program is to know your risk tolerance and set your priorities early, he says. While doing that, it’s important to keep in mind that it’s not possible to prevent every potential threat.
“Understand what data is most important to you and prioritize your limited resources and staff to make workloads manageable and keep your company safe,” he advised.
“Once you know your risk tolerance you need to understand your environment and perform a comprehensive inventory of internal and external assets to include threat feeds that you have access to. Generally, nobody knows your organization better than your own operators, so do not go on a shopping spree for tools/services without an inventory of what you do/don’t have.”
After all that’s out of the way, it’s time to automate security processes so that you can free up your limited cybersecurity talent and have them focus their efforts where they will be most effective.
“Always be on the lookout for passionate, qualified and knowledge-thirsty internal personnel that WANT to pivot to threat intelligence and develop them. Having someone that knows your organization, its culture, people and wants to grow goes a long way compared to the unknowns of bringing external talent,” he opined.
The importance of explaining risk
To those who are still fighting to get buy-in for a TI program from the organization’s executives and board members, he advises providing contextualized threat intelligence.
“You must put potential threats in terms that are meaningful to your audience such as how much risk a threat poses in terms of potential damage alongside which assets and data are at risk,” he explained.
“Many times business managers are focused on generating revenue and may see threat intelligence as an unnecessary expense. It is important for security leaders to communicate risk to their business managers and explain how those risks contribute to unnecessary cost and time delays if not addressed.”
He also advises security leaders to get to know the people they work with and to start building a professional working relationship. “The success of the program correlates to the strength of your team and how successful they are in collaborating and communicating with business managers.”
Cyber threat intelligence is one of the key tools security operations centers (SOCs) use to carry out their mission. While helpful, it’s also one of the many little things that add to the mounting pile of stress SOC teams often feel.
SOC analysts are tasked with keeping up with the organization’s security needs and getting end users to understand cybersecurity risks and change their behavior, but are often dealing with an overwhelming workload and constant emergencies and disruptions that take analysts away from their primary tasks.
Burnout is often lurking and ready to “grab” SOC team members, so Holley advises them to implement a number of techniques to manage stress:
- Identify the problem. Understand what is specifically causing your stress in the first place; a good way of doing this is via root cause analysis. Peel back the layers of the problem and understand the root cause
- Control your time. Take control of your time by blocking your calendar, giving yourself time to focus on your own tasks and avoiding becoming oversaturated with meetings
- Pick your battles. If you are going to go to war, make sure it is worth it. Avoid being dragged into confrontations that ultimately do not matter
- Stay healthy. Working out has many benefits when it comes to stress reduction; it gives you the opportunity to focus on something for YOU
“Today’s cyber security environment is challenging and requires analysts to react to changes quickly and effectively. It seems that there is a never-ending demand on flexible intellectual skills and the ability to analyze information and integrate different sources of knowledge to address challenges,” Holley noted.
His own preferred thinking process for making the most appropriate decisions as quickly as possible is the OODA loop (Observe, Orient, Decide, Act).
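The OODA cycle can be sketched as four composable steps; the signals, criticality scores, and response playbook below are hypothetical stand-ins for illustration:

```python
def observe(signals):
    # Observe: gather the raw signals currently available.
    return [s for s in signals if s is not None]

def orient(observations, asset_criticality):
    # Orient: put observations in business context (here, weight by asset criticality).
    return sorted(observations,
                  key=lambda o: asset_criticality.get(o["asset"], 0),
                  reverse=True)

def decide(oriented):
    # Decide: commit to the highest-priority item to act on.
    return oriented[0] if oriented else None

def act(decision, playbook):
    # Act: run the matching response playbook, escalating if none exists.
    if decision is None:
        return "monitor"
    return playbook.get(decision["kind"], "escalate")

signals = [{"asset": "db01", "kind": "brute_force"},
           {"asset": "kiosk", "kind": "adware"}]
criticality = {"db01": 9, "kiosk": 2}
playbook = {"brute_force": "lock account, rotate credentials"}

action = act(decide(orient(observe(signals), criticality)), playbook)
```

The value of the loop is that each pass feeds new observations back into the next, letting analysts reorient as the situation changes.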
“Risk management and being able to sort through large amounts of information and prioritize what needs to be actioned right away helps with problem solving. Keeping a cool head during difficult situations aids critical thinking but also allows for professional interactions with coworkers and stakeholders,” he concluded.
There’s an acute need for IoT risk management improvement, as most organizations do not know what tracking and safeguards their third parties have in place, according to the Shared Assessments Program and the Ponemon Institute.
“While the proliferation and consumerization of embedded technology, including IoT devices, continues to evolve at a rampant pace, new security vulnerabilities and exposures are introduced.
“This is especially true when the use of IoT devices is extended to third parties, fourth parties, or even more concerning, when it’s unknown where the use of IoT devices are being extended, or those extensions are unmanaged,” observes Rocco Grillo, Managing Director, Global Cyber Risk Services, Alvarez & Marsal.
Current IoT risk management programs are not keeping pace with the dramatic increase in IoT-related risks; a shortcoming that represents a clear and expanding threat to most organizations.
- The problem is fueled by the steep expansion in IoT devices, the lack of a centralized IoT risk management program, and the lack of involvement from senior leadership.
- Approximately one quarter of respondents self-report as higher-performing organizations that are significantly more likely to implement leading risk management practices and apply them to IoT use. However, even these organizations need to enhance many aspects of their risk management capabilities.
“Clearly, the gap between understanding and practice must be closed, and quickly,” notes Charlie Miller, Senior Advisor, The Santa Fe Group, Shared Assessments Program.
“The study underscores a major disconnect between the authority and involvement that survey respondents say is needed from their Boards of Directors, and the actual governance exhibited today. It’s increasingly imperative that organizations get ahead of the problem and address IoT risks before a major disruptive event, not after one.”
As this study makes plain, swift, step-function improvements are needed throughout most IoT risk management programs and in third-party risk management generally. Areas ripe for action include governance, risk and asset management practices, and resource allocation.
Greenbone Networks revealed the findings of a research study assessing critical infrastructure providers’ ability to operate during or in the wake of a cyberattack.
The cyber resilience of critical infrastructures
The research investigated the cyber resilience of organizations operating in the energy, finance, health, telecommunications, transport and water industries, located in the world’s five largest economies: UK, US, Germany, France and Japan. Of the 370 companies surveyed, only 36 percent had achieved a high level of cyber resilience.
To benchmark the cyber resilience of these critical infrastructures, the researchers assessed a number of criteria. These included their ability to manage a major cyberattack, their ability to mitigate the impact of an attack, whether they had the necessary skills to recover after an incident, as well as their best practices, policies and corporate culture.
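As an illustration only (the researchers' actual scoring model is not described here), criteria like these could be combined into a single benchmark score, with a hypothetical threshold separating highly resilient providers:

```python
# Hypothetical criteria mirroring the assessment areas described above.
CRITERIA = ["manage_attack", "mitigate_impact", "recovery_skills", "best_practices"]

def resilience_score(ratings: dict) -> float:
    # Simple unweighted mean of per-criterion ratings on a 0-100 scale.
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

def is_highly_resilient(ratings: dict, threshold: float = 75) -> bool:
    # The 75-point cutoff is an invented threshold for illustration.
    return resilience_score(ratings) >= threshold

provider = {
    "manage_attack": 80,
    "mitigate_impact": 70,
    "recovery_skills": 85,
    "best_practices": 90,
}

score = resilience_score(provider)  # 81.25 on this example
```

A real benchmark would weight criteria differently and draw on evidence rather than self-reported ratings, but the shape of the aggregation is the same.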
Infrastructure providers in the US were the most likely to score highly, with 50 percent of companies considered highly resilient. In Europe, the figure was lower at 36 percent. In Japan, it was just 22 percent.
There were also marked differences between industry sectors, with highly-regulated organizations, such as finance and telecoms, most likely to be cyber resilient (both at 46 percent). Transport providers were the least likely to be considered highly resilient (22 percent), while energy providers (32 percent), health providers (34 percent) and water utilities (36 percent) were all close to the average.
Characteristics of a highly-resilient infrastructure provider
They are able to identify critical business processes, related assets and their vulnerabilities: Highly-resilient organizations thoroughly analyze their critical business processes and know which digital assets underpin those processes. They continuously check for vulnerabilities, taking appropriate measures to mitigate or close them.
They deploy cybersecurity architectures that are tailored to their business processes: This focus places them in a strong position to mitigate damage caused by an attack.
They have well-established and well-communicated best practices: The highest performing organizations have well-defined policies and best practices. For example, in 95 percent of highly-resilient organizations, the person responsible for managing a digital asset is also responsible for securing it. This level of expertise and responsibility allows organizations to close gaps and repair damage quickly.
They are more likely to seek third-party support: These companies are more likely to engage with specialist providers, not only to manage security technologies, but also to obtain advice.
For example, they might employ consultants to help develop a security strategy for the company, select suitable technology, implement managed security services, establish metrics for success or calculate the business case for a security project.
They place greater importance on the ability to respond to cyber incidents and mitigate the impact on critical business processes: The ability to prevent cyber incidents is of secondary importance to highly-resilient organizations as they recognize attacks are inevitable.
They are more likely to focus on procedures that lessen the impact of an attack or accelerate their ability to bounce back after an incident.
They prepare for attacks through simulation: They simulate various what-if scenarios in training sessions and also involve stakeholders outside the IT department. They also apply the same cybersecurity rules to all digital assets.
“Cyberattacks are inevitable so being able to firstly withstand them and then recover from them is vital. Nowhere is this more important than in the critical infrastructure industries where any loss or reduction in service could be devastating both socially and economically, so it’s a concern that only just over a third of providers are what we consider to be highly-resilient,” said Dirk Schrader, cyber resilience architect at Greenbone Networks.
“Being cyber resilient involves much more than having enough IT security budget or deploying the right technologies. We hope that – by highlighting the key characteristics of highly-resilient organizations – this research will provide a blueprint for others.”
Organizations that put data at the center of their vision and strategy realize a differentiated competitive advantage by mitigating cost and risk, growing revenue and improving the customer experience, a Collibra survey of more than 900 global business analysts reveals.
Orgs rarely use data to guide business decisions
Despite a majority of companies saying they valued using data to drive decisions, many organizations are not consistently executing. While 84% of respondents said that it is very important or critical to put data at the center of their crucial business decisions and strategy, 43% admitted that their organizations fail to always or even routinely use data to guide business decisions.
Without a data management strategy, analysts often spend time on tasks that take away from their ability to perform analysis and provide value. This is a pronounced issue for less data-mature organizations, which are 55% less likely to say their data management strategies positively contribute to optimal business decisions.
Data management strategy improves customer trust
Those insights-driven decisions are also yielding more successful outcomes, giving data intelligent organizations a competitive edge in achieving their key business objectives.
These organizations, which have the ability to connect the right data, insights and algorithms so people can drive business value, realized an 8% advantage in improving customer trust, an 81% advantage in growing revenue, and a 173% advantage in better complying with regulations and requirements.
Those organizations adopting data intelligence were also 58% more likely to exceed their revenue goals than non-data intelligent organizations.
“To lead with data, companies need to advance how they discover, organize, collaborate with, and execute on the data they have,” said Felix Van de Maele, co-founder and CEO of Collibra.
“Companies also must optimize how data analysts spend their time and automate rote tasks with data management technology. By freeing analysts up to spend more time on value-added tasks, organizations can decrease time to insight and accelerate trusted business outcomes.”
With the economic impact of COVID-19 increasingly looking like an imminent recession and the way we do work altered perhaps forever, CIOs and CISOs will most likely be managing reduced budgets and a vastly different threat landscape. With the average cost of a breach continuing to skyrocket, the already slim margin for error will shrink even further.
Automation can both mitigate inherent risks incurred from rapid ecosystem shifts as well as help IT teams re-evaluate long term spending once operations return to normalcy. By leveraging automated security tools, organizations can develop a dynamic understanding of the assets in their network, the risks most likely to be exploited, and the potential impact to the enterprise. The result is an always up-to-date, prioritized view of the most impactful moves an infosec team can make at any given time to minimize the likelihood of a breach.
The race to meet new threats
The rapid transition to remote work pushed a greater share of digital infrastructure onto new applications, as yet unproven in the enterprise, and distributed risk across potentially insecure employee home networks. 60% of IT teams say that COVID-19 has already impacted their role, a number almost certain to grow as the crisis evolves. Already stretched and under-resourced, infosec teams must scramble to secure now widely used cloud services, remote access software, and collaboration tools.
That impact is just the beginning, as organizations need to balance flexible infrastructure with security. Cloud security continues to be a major concern for the enterprise, with 4 in 5 users saying they have encountered major security concerns. Some threats, like insecure devices on employee home networks (the same networks managed devices are now connected to), are largely outside the control of infosec teams.
Since malware is 3.75 times more likely to be found on corporate-associated home networks than on corporate networks, employees connecting corporate devices to these networks introduce thousands of new endpoints to the threat landscape. With tens to hundreds of millions of security-relevant signals to monitor on an ongoing basis, security is no longer a human-scale problem. Without automation, infosec teams must prioritize based on guesswork and gut instinct.
Automation to the rescue
Malicious actors ranging from lone wolves to state-affiliated groups have been found to be taking advantage of the COVID-19 crisis at all levels, from phishing emails targeted at stressed employees to credential stuffing attacks aimed at popular enterprise applications.
With risk growing at such an exponential rate, automated management tools can help security teams streamline operations and better manage vulnerabilities. Successful infosec strategies start with an accurate, up-to-date inventory of the hardware and software assets connected to the enterprise network.
The focus needs to be on leveraging tools that keep a continuous, real-time inventory, not only categorizing each asset but calculating its business criticality as well. Since risk is a function of the likelihood and impact of a breach, understanding business criticality is necessary when calculating impact.
Automated tools can track and inventory vulnerabilities across the entire enterprise attack surface, ranging from a user sharing the same password between work and personal applications to an outdated software version that is missing a critical patch.
The next step is prioritization: a security team lacking human capital is unlikely to have the time or resources to evaluate every vulnerability for potential impact and exploitability. Automated risk management tools can streamline the process by analyzing both the immediacy of a vulnerability and the impact it would have. For example, if the password-reusing employee has a high level of access across the corporate network, an automated security visibility tool could flag that as a higher priority than the software missing a patch.
Automated prioritization ensures that infosec teams can maximize their resources and focus on vulnerabilities that pose the highest risk at any particular time, a key feature when the risk landscape is rapidly evolving.
Maximize security budgets: Ready for the long haul
Security teams are at the forefront of managing the impact of the current macroeconomic and societal reality. Combining smaller budgets with the need to deploy new devices and on-board new software tools means it is harder than ever to deal with escalating threats from hackers seeking to exploit the expanding digital enterprise. With automation, IT teams can effectively triage existing vulnerabilities and build a solid foundation for long-term security.