Email attacks have moved past standard phishing and become more targeted over the years. In this article, I will focus on email impersonation attacks, outline why they are dangerous, and provide some tips to help individuals and organizations reduce their exposure to them.
What are email impersonation attacks?
Email impersonation attacks are malicious emails where scammers pretend to be a trusted entity to steal money and sensitive information from victims. The trusted entity being impersonated could be anyone – your boss, your colleague, a vendor, or a consumer brand you get automated emails from.
Email impersonation attacks are tough to catch and worryingly effective because we tend to take quick action on emails from known entities. Scammers use impersonation in concert with other techniques to defraud organizations and steal account credentials, sometimes without victims realizing their fate for days after the fraud.
Fortunately, we can all follow some security hygiene best practices to reduce the risk of email impersonation attacks.
Tip #1 – Look out for social engineering cues
Email impersonation attacks are often crafted with language that induces a sense of urgency or fear in victims, coercing them into taking the action the email wants them to take. Not every email that makes us feel these emotions will be an impersonation attack, of course, but it’s an important factor to keep an eye out for, nonetheless.
Here are some common phrases and situations you should look out for in impersonation emails:
- Short deadlines given at short notice for processes involving the transfer of money or sensitive information.
- Unusual purchase requests (e.g., iTunes gift cards).
- Employees requesting sudden changes to direct deposit information.
- Vendors sharing new bank account details right before an invoice payment is due.
Example: an email impersonation attack exploiting the COVID-19 pandemic to make an urgent request for gift card purchases.
Tip #2 – Always do a context check on emails
Targeted email attacks bank on victims being too busy and “doing before thinking” instead of stopping and engaging with the email rationally. While it may take a few extra seconds, always ask yourself if the email you’re reading – and what the email is asking for – make sense.
- Why would your CEO really ask you to purchase iTunes gift cards at two hours’ notice? Have they done it before?
- Why would Netflix emails come to your business email address?
- Why would the IRS ask for your SSN and other sensitive personal information over email?
To sum up this tip, I’d say: be a little paranoid while reading emails, even if they’re from trusted entities.
Tip #3 – Check for email address and sender name deviations
To stop email impersonation, many organizations have deployed keyword-based protection that catches emails where the email addresses or sender names match those of key executives (or other related keywords). To get past these security controls, impersonation attacks use email addresses and sender names with slight deviations from those of the entity the attacks are impersonating. Some common deviations to look out for are:
- Changes to the spelling, especially ones that are missed at first glance (e.g., “ei” instead of “ie” in a name).
- Changes based on visual similarities to trick victims (e.g., replacing “rn” with “m” because they look alike).
- Business emails sent from personal accounts like Gmail or Yahoo without advance notice. It’s advisable to validate the identity of the sender through secondary channels (text, Slack, or phone call) if they’re emailing you with requests from their personal account for the first time.
- Descriptive changes to the name, even if the changes fit in context. For example, attackers impersonating a Chief Technology Officer named Ryan Fraser may send emails with the sender name as “Ryan Fraser, Chief Technology Officer”.
- Changes to the components of the sender name (e.g., adding or removing a middle initial, abbreviating Mary Jane to MJ).
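As a rough illustration of how such deviations can be caught programmatically, the sketch below compares an incoming sender address against a known-contact list using homoglyph normalization and edit distance. The contact addresses, homoglyph pairs, and distance threshold are illustrative assumptions, not a production rule set:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Normalize common visual swaps before comparing ("rn" looks like "m").
HOMOGLYPHS = [("rn", "m"), ("vv", "w"), ("0", "o"), ("1", "l")]

def normalize(address: str) -> str:
    address = address.lower()
    for fake, real in HOMOGLYPHS:
        address = address.replace(fake, real)
    return address

def is_suspicious(sender: str, known_contacts: list) -> bool:
    """Suspicious = not an exact known address, but within edit distance 2
    of a known contact after homoglyph normalization."""
    if sender in known_contacts:
        return False
    norm = normalize(sender)
    return any(levenshtein(norm, normalize(k)) <= 2 for k in known_contacts)
```

A lookalike such as `ryan.fraser@acrne.com` normalizes to the legitimate `ryan.fraser@acme.com` and is flagged, while the exact known address passes cleanly.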
Tip #4 – Learn the “greatest hits” of impersonation phrases
Email impersonation has been around for long enough that there are well-known phrases and tactics we need to be aware of. The emails don’t always have to be directly related to money or data – the first email is sometimes a simple request, just to see who bites and buys into the email’s faux legitimacy. Be aware of the following phrases/context:
- “Are you free now?”, “Are you at your desk?” and related questions are frequent opening lines in impersonation emails. Because they seem like harmless emails with simple requests, they get past email security controls and lay the bait.
- “I need an urgent favor”, “Can you do something for me within the next 15 minutes?”, and other phrases implying the email is of a time-sensitive nature. If you get this email from your “CEO”, your instinct might be to respond quickly and be duped by the impersonation in the process.
- “Can you share your personal cell phone number?”, “I need your personal email”, and other out-of-context requests for personal information. The objective of these requests is to harvest information and build out a profile of the victim; once adversaries have enough information, they have another entity to impersonate.
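A naive version of the keyword screening mentioned in Tip #3 could flag these “greatest hits” directly. This is only a toy sketch; the phrase list is assembled from the examples above, and real email security filters rely on far richer signals than substring matching:

```python
# Assumed phrase list, drawn from the examples in this article.
URGENCY_PHRASES = [
    "are you free now",
    "are you at your desk",
    "i need an urgent favor",
    "within the next 15 minutes",
    "can you share your personal cell phone number",
    "i need your personal email",
]

def flag_phrases(body: str) -> list:
    """Return the known impersonation phrases found in the message body."""
    text = body.lower()
    return [p for p in URGENCY_PHRASES if p in text]

hits = flag_phrases("Hi - are you at your desk? I need an urgent favor.")
# → ["are you at your desk", "i need an urgent favor"]
```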
Tip #5 – Use secondary channels of authentication
Enterprise adoption of two-factor authentication (2FA) has grown considerably over the years, helping safeguard employee accounts and reduce the impact of account compromise.
Individuals should try to replicate this best practice for any email that makes unusual requests related to money or data. For example:
- Has a vendor emailed you with a sudden change in their bank account details, right when an invoice is due? Call or text the vendor and confirm that they sent the email.
- Did your manager email you asking for gift card purchases? Send them a Slack message (or whatever productivity app you use) to confirm the request.
- Did your HR representative email you a COVID resource document that needs email account credentials to be viewed? Check the veracity of the email with the HR rep.
Even if you’re reaching out to very busy people for this additional authentication, they will understand and appreciate your caution.
These tips are meant as starting points for individuals and organizations to better understand email impersonation and start addressing its risk factors. But effective protection against email impersonation can’t be down to eye tests alone. Enterprise security teams should conduct a thorough audit of their email security stack and explore augments to native email security that offer specific protection against impersonation.
With email more important to our digital lives than ever, it’s vital that we are able to believe people are who their email says they are. Email impersonation attacks exploit this sometimes-misplaced belief. Stopping email impersonation attacks will require a combination of security hygiene, email security solutions that provide specific impersonation protection, and some healthy paranoia while reading emails – even if they seem to be from people you trust.
A failing cybersecurity market is contributing to the ineffective performance of cybersecurity technology, research from Debate Security reveals.
Based on over 100 comprehensive interviews with business and cybersecurity leaders from large enterprises, together with vendors, assessment organizations, government agencies, industry associations and regulators, the research shines a light on why technology vendors are not incentivized to deliver products that are more effective at reducing cyber risk.
The report supports the view that efficacy problems in the cybersecurity market are primarily due to economic issues, not technological ones. The research addresses three key themes and ultimately arrives at a consensus for how to approach a new model.
Cybersecurity technology is not as effective as it should be
90% of participants reported that cybersecurity technology is not as effective as it should be when it comes to protecting organizations from cyber risk. Trust in technology to deliver on its promises is low, and yet when asked how organizations evaluate cybersecurity technology efficacy and performance, there was not a single common definition.
Pressure has been placed on improving people- and process-related issues, but ineffective technology has become accepted as normal – and, shamefully, inevitable.
The underlying problem is one of economics, not technology
92% of participants reported that there is a breakdown in the market relationship between buyers and vendors, with many seeing deep-seated information asymmetries.
Outside government, few buyers today use detailed, independent cybersecurity efficacy assessment as part of their cybersecurity procurement process, and not even the largest organizations reported having the resources to conduct all the assessments themselves.
As a result, vendors are incentivized to focus on other product features, and on marketing, deprioritizing cybersecurity technology efficacy – one of several classic signs of a “market for lemons”.
Coordinated action between stakeholders only achieved through regulation
Unless buyers demand greater efficacy, regulation may be the only way to address the issue. Overcoming first-mover disadvantages will be critical to fixing the broken cybersecurity technology market.
Many research participants believe that coordinated action between all stakeholders can only be achieved through regulation – though some hold out hope that coordination could be achieved through sectoral associations.
In either case, 70% of respondents feel that independent, transparent assessment of technology would help solve the market breakdown. Setting standards for technology assessment, rather than for the technology itself, could avoid stifling innovation.
Defining cybersecurity technology efficacy
Participants in this research broadly agree that four characteristics are required to comprehensively define cybersecurity technology efficacy.
To be effective, cybersecurity solutions need:
- The capability to deliver the stated security mission (be fit-for-purpose).
- The practicality that enterprises need to implement, integrate, operate and maintain them (be fit-for-use).
- The quality in design and build to avoid vulnerabilities and negative impact.
- The provenance in the vendor company, its people and supply chain, such that these do not introduce additional security risk.
“In cybersecurity right now, trust doesn’t always sell, and good security doesn’t always sell and isn’t always easy to buy. That’s a real problem,” said Ciaran Martin, advisory board member, Garrison Technology.
“Why we’re in this position is a bit of a mystery. This report helps us understand it. Fixing the problem is harder. But our species has fixed harder problems and we badly need the debate this report calls for, and industry-led action to follow it up.”
“Company boards are well aware that cybersecurity poses potentially existential risk, but are generally not well equipped to provide oversight on matters of technical detail,” said John Cryan, Chairman of Man Group.
“Boards are much better equipped when it comes to the issues of incentives and market dynamics revealed by this research. Even if government regulation proves inevitable, I would encourage business leaders to consider these findings and to determine how, as buyers, corporates can best ensure that cybersecurity solutions offered by the market are fit for purpose.”
“As a technologist and developer of cybersecurity products, I really feel for cybersecurity professionals who are faced with significant challenges when trying to select effective technologies,” said Henry Harrison, CSO of Garrison Technology.
“We see two noticeable differences when selling to our two classes of prospects. For security-sensitive government customers, technology efficacy assessment is central to buying behavior – but we rarely see anything similar when dealing with even the most security-sensitive commercial customers. We take from this study that in many cases this has less to do with differing risk appetites and more to do with structural market issues.”
Many companies tend to jump into the cloud before thinking about security. They may think they’ve thought about security, but when moving to the cloud, the whole concept of security changes. The security model must transform as well.
Moving to the cloud and staying secure
Most companies maintain a “castle, moat, and drawbridge” attitude to security. They put everything inside the “castle” (datacenter); establish a moat around it, with sharks and alligators, guns on turrets; and control access by raising the drawbridge. The access protocol involves a request for access, vetting through firewall rules where the access is granted or denied. That’s perimeter security.
When moving to the cloud, perimeter security is still important, but identity-based security is available to strengthen the security posture. That’s where a cloud partner skilled at explaining and operating a different security model is needed.
Anybody can grab a virtual machine, build the machine in the cloud, and be done, but establishing a VM and transforming the machine to a service with identity-based security is a different prospect. When identity is added to security, the model looks very different, resulting in cost savings and an increased security posture.
Advanced technology, cost of security, and lack of cybersecurity professionals place a strain on organizations. Cloud providers invest heavily in infrastructure, best-in-class tools, and a workforce uniquely focused on security. As a result, organizations win operationally, financially, and from a security perspective, when moving to the cloud. To be clear, moving applications and servers, as is, to the cloud does not make them secure.
Movement to the cloud should be a standardized process, guided by a Cloud Center of Excellence (CCoE) or Cloud Business Office (CBO); when that process is focused on security first, organizations can reap the security benefits.
Although security is marketed as a shared responsibility in the cloud, ultimately, the owner of the data (customer) is responsible and the responsibility is non-transferrable. In short, the customer must understand the responsibility matrix (RACI) involved to accomplish their end goals. Every cloud provider has a shared responsibility matrix, but organizations often misunderstand the responsibilities or the lines fall into a grey area. Regardless of responsibility models, the data owner has a responsibility to protect the information and systems. As a result, the enterprise must own an understanding of all stakeholders, their responsibilities, and their status.
When choosing a partner, it’s vital for companies to identify their exact needs, their weaknesses, and even their culture. No cloud vendor will cover it all from the beginning, so it’s essential that organizations take control and ask the right questions (see Cloud Security Alliance’s CAIQ), in order to place trust in any cloud provider. If it’s to be a managed service, for example, it’s crucial to ask detailed questions about how the cloud provider intends to execute the offering.
It’s important to develop a standard security questionnaire and probe multiple layers deep into the service model until the provider is unable to meet the need. Looking through a multilayer deep lens allows the customer and service provider to understand the exact lines of responsibility and the details around task accomplishment.
It might sound obvious, but it’s worth stressing: trust is a shared responsibility between the customer and cloud provider. Trust is also earned over time and is critical to the success of the customer-cloud provider relationship. That said, zero trust is a technical term that means, from a technology viewpoint, assume danger and breach. Organizations must trust their cloud provider but should avoid blind trust and validate. Trust as a Service (TaaS) is a newer acronym that refers to third-party endorsement of a provider’s security practices.
Key influencers of a customer’s trust in their cloud provider include:
- Data location
- Investigation status and location of data
- Data segregation (keeping cloud customers’ data separated from others)
- Privileged access
- Backup and recovery
- Regulatory compliance
- Long-term viability
A TaaS example: Google Cloud
Google has taken great strides to earn customer trust, designing the Google Cloud Platform with a keen eye on zero trust through its implementation of the BeyondCorp model. For example, Google has implemented two core concepts:
- Delivery of services and data: ensuring that people with the correct identity and the right purpose can access the required data every time
- Prioritization and focus: access and innovation are placed ahead of threats and risks, meaning that as products are innovated, security is built into the environment
Transparency is very important to the trust relationship. Google has enabled transparency through strong visibility and control of data. When evaluating cloud providers, understanding their transparency related to access and service status is crucial. Google ensures transparency by using specific controls including:
- Limited data center access from a physical standpoint, adhering to strict access controls
- Disclosing how and why customer data is accessed
- Incorporating a process of access approvals
Multi-layered security for a trusted infrastructure
Finally, cloud services must provide customers with an understanding of how each layer of infrastructure works and build rules into each. This includes operational and device security, encrypting data at rest, multiple layers of identity, and finally storage services: multi-layered, and supported by security by default.
Cloud native companies have a security-first approach and naturally have a higher security understanding and posture. That said, when choosing a cloud provider, enterprises should always understand, identify, and ensure that their cloud solution addresses each one of their security needs, and who’s responsible for what.
Essentially, every business must find a cloud partner that can answer all the key questions, provide transparency, and establish a trusted relationship in the zero trust world where we operate.
Zerologon scored a perfect 10 CVSS score. Threats rating a perfect 10 are easy to execute and have deep-reaching impact. Fortunately, they aren’t frequent, especially in prominent software brands such as Windows. Still, organizations that perpetually lag when it comes to patching become prime targets for cybercriminals. Flaws like Zerologon are rare, but there’s no reason to assume the next perfect 10 CVSS vulnerability won’t arrive as a zero-day.
Zerologon: Unexpected squall
Zerologon escalates a domain user beyond their current role and permissions to a Windows Domain Administrator. This vulnerability is trivially easy to exploit. While it seems that the most obvious threat is a disgruntled insider, attackers may target any average user. The most significant risk comes from a user with an already compromised system.
In this scenario, a bad actor has already taken over an end user’s system but is constrained only to their current level of access. By executing this exploit, the bad actor can break out of their existing permissions box. This attack grants them the proverbial keys to the kingdom in a Windows domain to access whatever Windows-based devices they wish.
Part of why Zerologon is problematic is that many organizations rely on Windows as an authoritative identity for a domain. To save time, they promote their Windows Domain Administrators to an Administrator role throughout the organizational IT ecosystem and assign bulk permissions, rather than adding them individually. This method eases administration by removing the need to update the access permissions frequently as these users change jobs. This practice violates the principle of least privilege, leaving an opening for anyone with a Windows Domain Administrator role to exercise broad-reaching access rights beyond what they require to fulfill the role.
Beware of sharks
Advanced preparation for attacks like these requires a fundamental paradigm shift in organizational boundary definitions, away from a legacy mentality to a more modern cybersecurity mindset. The traditional castle model assumes all threats remain outside the firewall boundary and trusts, to some degree, everything that is natively internal or connected via VPN.
Modern cybersecurity professionals understand the advantage of controls like zero standing privilege (ZSP), which authorizes no one by default and requires that each access request be evaluated before privileged access is granted. Think of it much like the security check at an airport. To get in, everyone (passenger, pilot, even store staff) needs to be inspected, prove they belong, and have nothing questionable in their possession.
This continual re-certification prevents users from gaining access once they’ve experienced an event that alters their eligibility, such as leaving the organization or changing positions. Checking permissions before approving them ensures only those who currently require a resource can access it.
My hero zero (standing privilege)
Implementing the design concept of zero standing privilege is crucial to hardening against privilege escalation attacks, as it removes the administrator’s vast amounts of standing power and access. Users acquire these rights for a limited period and only on an as-needed basis. This Just-In-Time (JIT) method of provisioning creates a better access review process. Requests are either granted time-bound access or flagged for escalation to a human approver, ensuring automation oversight.
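The JIT flow described above – grant time-bound access automatically, escalate anything larger to a human approver – can be sketched in a few lines. The in-memory store, role names, and the 60-minute auto-approval cutoff below are assumptions for illustration, not features of any specific PAM product:

```python
GRANTS = {}            # (user, resource) -> expiry timestamp (seconds)
MAX_AUTO_MINUTES = 60  # longer requests escalate to a human approver

def request_access(user: str, resource: str, minutes: int, now: float) -> str:
    """Grant time-bound access automatically, or flag for human approval."""
    if minutes > MAX_AUTO_MINUTES:
        return "escalated"            # automation oversight: a human decides
    GRANTS[(user, resource)] = now + minutes * 60
    return "granted"

def has_access(user: str, resource: str, now: float) -> bool:
    """Access exists only while the time-bound grant is unexpired."""
    expiry = GRANTS.get((user, resource))
    return expiry is not None and now < expiry
```

Because every right expires, there is no standing privilege to escalate: an attacker who compromises an account between grants finds it holding nothing.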
An essential component of zero standing privilege is avoiding super-user roles and access. Old school practitioners may find it odd and question the impact on daily administrative tasks that keep the ecosystem running. Users manage these tasks through heavily logged time-limited permission assignments. Reliable user behavior analytics, combined with risk-based privileged access management (PAM) and machine learning supported log analysis, offers organizations better contextual identity information. Understanding how their privileged access is leveraged and identifying access misuse before it takes root is vital to preventing a breach.
Peering into the depths
To even start with zero standing privilege, an organization must understand what assets they consider privileged. The categorization of digital assets begins the process. The next step is assigning ownership of these resources. Doing this allows organizations to configure the PAM software to accommodate the policies and access rules defined organizationally, ensuring access rules meet governance and compliance requirements.
The PAM solution requires in-depth visibility of each individual’s full access across all cloud and SaaS environments, as well as throughout the internal IT infrastructure. This information improves the identification of toxic combinations, where granted permissions create compliance issues such as segregation of duties (SoD) violations.
AI & UEBA to the rescue
Zero standing privilege generates a large number of user logs and behavioral information over time. Manual log review becomes unsustainable very quickly. Leveraging the power of AI and machine learning to derive intelligent analytics allows organizations to identify risky behaviors and locate potential breaches far faster than human users.
Integration of a user and entity behavior analytics (UEBA) software establishes baselines of behavior, triggering alerts when deviations occur. UEBA systems detect insider threats and advanced persistent threats (APTs) while generating contextual identity information.
UEBA systems track all behavior linked back to an entity and identify anomalous behaviors such as spikes in access requests, requests for data that would typically not be allowed for that user’s roles, or systematic access to numerous items. Contextual information helps organizations identify situations that might indicate a breach or point to unauthorized exfiltration of data.
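The baseline-and-deviation idea behind UEBA can be illustrated with a toy example that flags a user whose daily access-request count jumps well above their historical mean. The 3-standard-deviation threshold is an assumed convention for this sketch, not part of any specific product:

```python
from statistics import mean, stdev

def is_anomalous(history: list, today: int, sigmas: float = 3.0) -> bool:
    """True if today's count deviates more than `sigmas` std-devs
    from the user's historical baseline."""
    mu = mean(history)
    sd = stdev(history)
    if sd == 0:
        return today != mu          # a perfectly flat baseline: any change is new
    return abs(today - mu) > sigmas * sd

baseline = [12, 9, 11, 10, 13, 12, 10]   # typical daily access requests
```

A normal day of 13 requests stays inside the band, while a burst of 60 requests triggers an alert for analyst review.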
Your compass points to ZTA
Protecting against privilege escalation threats requires more than merely staying up to date on patches. Part of stopping attacks like Zerologon is to re-imagine how security is architected in an organization. Centering identity as the new security perimeter and implementing zero standing privilege are essential to the foundation of a security model known as zero trust architecture (ZTA).
Zero trust architecture has existed for a while in the corporate world. It is gaining attention from the public sector since NIST’s recent publication of SP 800-207, which outlines ZTA and how government agencies can leverage it. NIST’s endorsement of ZTA opened the doors for government entities and civilian contractors to incorporate it into their security models. Taking this route helps close the privilege escalation pathway, providing your organization a safe harbor in the event of another cybersecurity perfect storm.
CISOs are conflicted about how their companies can best reposition themselves to address the sudden and rapid shift to remote work caused by the pandemic, research from Hysolate reveals.
The story emerging from the data in the study is clear:
- COVID-19 has accelerated the arrival of the remote-first era.
- Legacy remote access solutions such as virtual desktop infrastructure (VDI), desktop-as-a-service (DaaS), and virtual private networks (VPN), among others, leave much to be desired in the eyes of CISOs and are not well suited to handle many of the new demands of the remote-first era.
- Half of CISOs believe that security measures are impacting productivity when scaling remote-first policies.
- Bring-your-own-PC (BYOPC) policies further complicate organizations’ approaches to secure remote access.
Remote work becoming a permanent workflow
Beyond the overwhelming consensus that work-from-home is here to stay (87 percent of respondents believe remote work has become a permanent workflow in their companies’ operations), the study reveals that there is no singular best practice or market-leading approach to enabling workers in the remote-first era.
There is no prevailing solution in place to provide secure remote access to corporate assets:
- 24 percent of survey respondents utilize VPN, and more than half of these also employ split tunneling – a practice that allows users to access dissimilar security domains at the same time – to reduce the organization’s VPN loads and traffic backhauling. However, of those that use split tunneling, two-thirds of CISOs express concerns about the security of the approach.
- 36 percent deploy VDI or DaaS. However, of those CISOs that utilize VDI or DaaS, only 18 percent say their employees are happy with their company’s VDI or DaaS solution. Further, dissatisfaction with these legacy remote access solutions isn’t limited to user experience; more than three-quarters of CISOs feel that their return on investment in VDI or DaaS has been medium to low.
Remote security policies issues
CISOs are also grappling with what their remote security policies should be in the new remote-first era:
- 26 percent of CISOs surveyed have introduced more stringent endpoint security and corporate access measures since the arrival of the pandemic.
- 35 percent have relaxed their security policies in order to foster greater productivity among remote workers.
- 39 percent have left their security policies the same.
More than 60 percent of companies felt that they weren’t ready for the changes the pandemic forced on them. What is uncertain is whether the 39 percent who have made no changes are standing pat because they are comfortable with their company’s security posture or because they don’t know what changes to make.
CISOs scramble to enable remote work and maintain security
“But when we surveyed CISOs who were scrambling to scale their remote workforce IT operations in light of the pandemic, it became clear how important worker productivity has now become and that legacy solutions like VPN, VDI and DaaS just can’t handle the demands of the new remote-first reality.”
Web browsing restrictions and BYOPC policies further muddy the remote-first waters. Sixty-two percent of CISOs said their companies restrict access to certain websites on corporate devices, while 22 percent say their companies do not allow access to corporate networks or applications from a non-corporate device.
The confusion indicated by the mixed results of the survey report is enough to cause many CISOs a sleepless night. In fact, the varied response trend carried over to the one unconventional question asked in the study regarding pandemic indulgences: 20 percent of CISOs report drinking more wine during the COVID-19 crisis; 32 percent drink more coffee; 8 percent choose whiskey; and, perhaps in what should come as a surprise to no one, 40 percent chose “All of the Above.”
We are beginning to shift away from what has long been our first and last line of defense: the password. It’s an exciting time. Since the beginning, passwords have aggravated people. Meanwhile, passwords have become the de facto first step in most attacks. Yet I can’t help but think, what will the consequences of our actions be?
Intended and unintended consequences
Back when overhead cameras came to the express toll routes in Ontario, Canada, it wasn’t long before the SQL injection to drop tables made its way onto bumper stickers. More recently in California, researcher Joe Tartaro purchased a license plate that said NULL. With the bumper stickers, the story goes, everyone sharing the road would get a few hours of toll-free driving. But with the NULL license plate? Tartaro ended up on the hook for every traffic ticket with no plate specified, to the tune of thousands of dollars.
One organization I advised recently completed an initiative to reduce the number of agents on the endpoint. In a year when many are extending the lifespan and performance of endpoints while eliminating location-dependent security controls, this shift makes strategic sense.
Another CISO I spoke with recently consolidated multi-factor authenticators onto a single platform. Standardizing the user experience and reducing costs is always a pragmatic move. Yet both decisions limited future options: any initiative by the security team that changed authenticators or added agents ended up stuck in park, waiting for a green light.
Be careful not to limit future moves
To make moves that open up possibilities, security teams think along two lines: usability and defensibility. That is, how will the change impact the workforce, near term and long term? And from the opposite angle, how will the change affect criminal behavior, near term and long term?
Whether decreasing the number of passwords required through single sign-on (SSO) or eliminating the password altogether in favor of a strong authentication factor (passwordless), the priority is on the workforce experience. The number one reason for tackling the password problem given by security leaders is improving the user experience. It is a rare security control that makes people’s lives easier and leadership wants to take full advantage.
There are two considerations when planning for usability. The first is ensuring the tactic addresses the common friction points. For example, with passwordless, does the approach provide access to the devices and applications people work with? Is it more convenient and faster than what they do today? The second consideration is evaluating what the tactic allows the security team to do next. Does the approach to passwordless or SSO block a future initiative due to lock-in? Or will the change enable us to take future steps to secure authentication?
The one thing we know for certain is, whatever steps we take, criminals will take steps to get around us. In the sixty years since the first password leak, we’ve done everything we can, using both machine and man. We’ve encrypted passwords. We’ve hashed them. We’ve increased key length and algorithm strength. At the same time, we’ve asked users to create longer passwords, more complex passwords, unique passwords. We’ve provided security awareness training. None of these steps were taken in a vacuum. Criminals cracked files, created rainbow tables, brute-forced and phished credentials. Sixty years of experience suggests any advancement we make will be met with an advanced attack.
We must increase the trust in authentication while increasing usability, and we must take steps that open up future options. Security teams can increase trust by pairing user authentication with device authentication. Now the adversary must both compromise the authentication and gain access to the device.
To reduce the likelihood of device compromise, set policies to prevent unpatched, insecure, infected, or compromised devices from authenticating. The likelihood can be even further reduced by capturing telemetry, modeling activity, and comparing activity to the user’s baseline. Now the adversary must compromise authentication, gain access to the endpoint device, avoid endpoint detection, and avoid behavior analytics.
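The layered checks described above can be sketched as a simple policy function. This is an illustrative sketch only: the posture fields, risk score, and threshold are assumptions for the sake of the example, not any particular product's API.

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    patched: bool          # OS and applications up to date
    disk_encrypted: bool   # full-disk encryption enabled
    edr_running: bool      # endpoint detection agent healthy

def allow_authentication(user_authenticated: bool,
                         device: DevicePosture,
                         behavior_risk: float,
                         risk_threshold: float = 0.7) -> bool:
    """Grant access only when the user, the device, and the observed
    behavior all check out (hypothetical layered policy)."""
    device_trusted = device.patched and device.disk_encrypted and device.edr_running
    behavior_normal = behavior_risk < risk_threshold
    return user_authenticated and device_trusted and behavior_normal

# A stolen credential alone is no longer enough to get in:
healthy = DevicePosture(patched=True, disk_encrypted=True, edr_running=True)
print(allow_authentication(True, healthy, behavior_risk=0.1))   # True
print(allow_authentication(True, healthy, behavior_risk=0.95))  # False: anomalous behavior
```

The point of the layering is visible in the conjunction: the adversary must defeat every factor, not any one of them.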
Technology is full of unintended consequences. Some lead to toll-free drives and others lead to unexpected fees. Some open new opportunities, others new vulnerabilities. Today, many are moving to improve user experience by reducing or removing passwords. The consequences won’t be known immediately. We must ensure our approach meets the use cases the workforce cares about while positioning us to address longer-term goals and challenges.
Additionally, we must get ahead of adversaries and criminals by increasing trust in passwordless authentication through device trust and behavior analytics. We can’t predict what is to come, but these are steps security teams can take today to better position and protect our organizations.
What is confidential computing? Can it strengthen enterprise security? Sam Lugani, Lead Security PMM, Google Workspace & GCP, answers these and other questions in this Help Net Security interview.
How does confidential computing enhance the overall security of a complex enterprise architecture?
We’ve all heard about encryption in-transit and at-rest, but as organizations prepare to move their workloads to the cloud, one of the biggest challenges they face is how to process sensitive data while still keeping it private. There has never been an easy way to keep data encrypted while it is being processed.
Confidential computing is a breakthrough technology which encrypts data in-use – while it is being processed. It creates a future where private and encrypted services become the cloud standard.
At Google Cloud, we believe this transformational technology will help instill confidence that customer data is not being exposed to cloud providers or susceptible to insider risks.
Confidential computing has moved from research projects into worldwide deployed solutions. What are the prerequisites for delivering confidential computing across both on-prem and cloud environments?
Running workloads confidentially will differ based on what services and tools you use, but one thing is a given – organizations don’t want to sacrifice usability and performance for security.
Those running Google Cloud can seamlessly take advantage of the products in our portfolio, Confidential VMs and Confidential GKE Nodes.
All customer workloads that run in VMs or containers today can run confidentially without significant performance impact. The best part is that we have worked hard to simplify the complexity. One checkbox—it’s that simple.
What type of investments does confidential computing require? What technologies and techniques are involved?
To deliver on the promise of confidential computing, customers need to take advantage of security technology offered by modern, high-performance CPUs, which is why Google Cloud’s Confidential VMs run on N2D series VMs powered by 2nd Gen AMD EPYC processors.
To support these environments, we also had to update our own hypervisor and low-level platform stack while also working closely with the open source Linux community and modern operating system distributors to ensure that they can support the technology.
Networking and storage drivers are also critical to the deployment of secure workloads and we had to ensure we were capable of handling confidential computing traffic.
How is confidential computing helping large organizations with a massive work-from-home movement?
As we entered the first few months of dealing with COVID-19, many organizations expected a slowdown in their digital strategy. Instead, we saw the opposite – most customers accelerated their use of cloud-based services. Today, enterprises have to manage a new normal which includes a distributed workforce and new digital strategies.
With workforces dispersed, confidential computing can help organizations collaborate on sensitive workloads in the cloud across geographies and competitors, all while preserving privacy of confidential datasets. This can lead to the development of transformation technologies – imagine, for example, being able to more quickly build vaccines and cure diseases as a result of this secure collaboration.
How do you see the work of the Confidential Computing Consortium evolving in the near future?
Cloud providers, hardware manufacturers, and software vendors all need to work together to define standards to advance confidential computing. As the technology garners more interest, sustained industry collaboration such as the Consortium will be key to helping realize the true potential of confidential computing.
Vulnerability scanners can be a very useful addition to any development or operations process. Since a typical vulnerability scanner needs to detect vulnerabilities in deployed software, they are (generally) not dependent on the language or technology used for the application they are scanning.
This often means they are not the top choice for detecting every class of vulnerability, fickle bugs, or business logic issues, but it makes them great and very common tools for testing a large number of diverse applications, where such dynamic application security testing (DAST) tools are indispensable. This includes testing for security defects in software currently being developed as part of an SDLC process, reviewing third-party applications deployed inside one’s network (as part of a due diligence process) or – most commonly – finding issues in all kinds of internally developed applications.
We reviewed Netsparker Enterprise, which is one of the industry’s top choices for web application vulnerability scanning.
Netsparker Enterprise is primarily a cloud-based solution, which means it will focus on applications that are publicly available on the open internet, but it can also scan in-perimeter or isolated applications with the help of an agent, which is usually deployed in a pre-packaged Docker container or a Windows or Linux binary.
To test this product, we wanted to know how Netsparker handles a few things:
1. Scanning workflow
2. Scan customization options
3. Detection accuracy and results
4. CI/CD and issue tracking integrations
5. API and integration capabilities
6. Reporting and remediation efforts.
To assess the tool’s detection capabilities, we needed a few targets to scan and assess.
After some thought, we decided on the following targets:
1. DVWA – Damn Vulnerable Web Application – An old-school extremely vulnerable application, written in PHP. The vulnerabilities in this application should be detected without an issue.
2. Vulnapi – A Python 3-based vulnerable REST API, written in the FastAPI framework running on Starlette ASGI, featuring a number of API-based vulnerabilities.
After logging in to Netsparker, you are greeted with a tutorial and a “hand-holding” wizard that helps you set everything up. If you’ve worked with a vulnerability scanner before, you might know what to do, but this feature is useful for people who don’t have that experience, e.g., software or DevOps engineers, who should definitely use such tools in their development processes.
Initial setup wizard
Scanning targets can be added manually or through a discovery feature that will try to find them by matching the domain from your email, websites, reverse IP lookups and other methods. This is a useful feature if other methods of asset management are not used in your organization and you can’t find your assets.
New websites or assets for scanning can be added directly or imported via a CSV or a TXT file. Sites can be organized in Groups, which helps with internal organization or per project / per department organization.
Adding websites for scanning
Scans can be defined per group or per specific host. Scans can be either defined as one-off scans or be regularly scheduled to facilitate the continuous vulnerability remediation process.
To better guide the scanning process, the classic scan scope features are supported. For example, you can define specific URLs as “out-of-scope” either by supplying a full path or a regex pattern – a useful option if you want to skip specific URLs (e.g., logout, user delete functions). Specific HTTP methods can also be marked as out-of-scope, which is useful if you are testing an API and want to skip DELETE methods on endpoints or objects.
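The out-of-scope rules described above boil down to pattern filtering over paths and methods. A rough illustration of the idea (the patterns and rule format here are hypothetical, not Netsparker's actual syntax):

```python
import re

# Hypothetical out-of-scope rules: skip logout and destructive user actions
OUT_OF_SCOPE_PATTERNS = [
    r"/logout\b",
    r"/users/\d+/delete",
]
OUT_OF_SCOPE_METHODS = {"DELETE"}

def in_scope(method: str, path: str) -> bool:
    """Return True if the scanner should exercise this request."""
    if method.upper() in OUT_OF_SCOPE_METHODS:
        return False
    return not any(re.search(p, path) for p in OUT_OF_SCOPE_PATTERNS)

print(in_scope("GET", "/products/42"))     # True
print(in_scope("GET", "/logout"))          # False: would end the session
print(in_scope("DELETE", "/products/42"))  # False: destructive method
```

Excluding logout endpoints matters in practice: a crawler that hits them mid-scan loses its authenticated session and silently degrades coverage.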
Initial scan configuration
Scan scope options
One feature we quite liked is the support for uploading the “sitemap” or specific request information into Netsparker before scanning. This feature can be used to import a Postman collection or an OpenAPI file to facilitate scanning and improve detection capabilities for complex applications or APIs. Other formats such as CSV, JSON, WADL, WSDL and others are also supported.
For the red team, loading links and information from Fiddler, Burp or ZAP session files is supported, which is useful if you want to expand your automated scanning toolbox. One limitation we encountered is the inability to point to a URL containing an OpenAPI definition – a capability that would be extremely useful for automated and scheduled scanning workflows for APIs that have Swagger web UIs.
Scan policies can be customized and tuned in a variety of ways, from the languages that are used in the application (ASP/ASP.NET, PHP, Ruby, Java, Perl, Python, Node.js and Other), to database servers (Microsoft SQL Server, MySQL, Oracle, PostgreSQL, Microsoft Access and Others), to the standard choice of Windows- or Linux-based OSes. These scan optimizations should improve the detection capability of the tool, shorten scanning times, and give us a glimpse of where the tool should perform best.
The next important question is, does it blend… or integrate? From an integration standpoint, sending email and SMSes about the scan events is standard, but support for various issue tracking systems like Jira, Bitbucket, Gitlab, Pagerduty, TFS is available, and so is support for Slack and CI/CD integration. For everything else, there is a raw API that can be used to tie in Netsparker to other solutions if you are willing to write a bit of integration scripting.
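As a sketch of what such integration scripting might look like – note that the base URL, endpoint path, payload field names, and authentication scheme below are illustrative assumptions, not the documented Netsparker API – a small script could enqueue a scan from a CI pipeline like this:

```python
import base64
import json
import urllib.request

API_BASE = "https://scanner.example.local/api/1.0"  # hypothetical base URL
API_TOKEN = "user-id:api-token"                     # hypothetical credential pair

def build_scan_request(target_url: str, policy: str = "Default") -> dict:
    # Hypothetical payload shape; consult the vendor's API docs for real fields.
    return {"TargetUri": target_url, "PolicyName": policy}

def enqueue_scan(target_url: str) -> None:
    payload = json.dumps(build_scan_request(target_url)).encode()
    auth = base64.b64encode(API_TOKEN.encode()).decode()
    req = urllib.request.Request(
        f"{API_BASE}/scans/new",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {auth}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # needs a live endpoint to succeed
        print(resp.status)

print(build_scan_request("https://staging.example.com"))
```

Wrapping the vendor API this way lets a deploy job trigger a scan of each new staging build without anyone touching the web UI.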
One really well-implemented feature is the support for logging into the testing application, as the inability to hold a session and scan from an authenticated context in the application can lead to a bad scanning performance.
Netsparker supports classic form-based login, but 2FA-based login flows that require TOTP or HOTP are also supported. This is a great feature: you can add the OTP seed and define the period in Netsparker, and you are all set to scan OTP-protected logins. No more shimming and adding code to bypass the 2FA method in order to scan the application.
Custom scripting workflow for authentication
If we had to nitpick, we might point out that it would be great if Netsparker also supported U2F / FIDO2 implementations (by software emulating the CTAP1 / CTAP2 protocol), since that would cover the most secure 2FA implementations.
In addition to form-based authentication, Basic, NTLM/Kerberos, header-based (for JWTs), client certificate and OAuth2-based authentication are also supported, which makes it easy to authenticate to almost any enterprise application. The login / logout flow is also verified and supported through a custom dialog, where you can verify that the supplied credentials work, and you can configure how to retain the session.
Login verification helper
And now for the core of this review: what Netsparker did and did not detect.
In short, everything from DVWA was detected, except broken client-side security, which by definition is almost impossible to detect with security scanning if custom rules aren’t written. So, from a “classic” application point of view, the coverage is excellent, even the out-of-date software versions were flagged correctly. Therefore, for normal, classic stateful applications, written in a relatively new language, it works great.
One interesting point for vulnerability detection is that Netsparker uses an engine that tries to verify if the vulnerability is exploitable and will try to create a “proof” of vulnerability, which reduces false positives.
On the negative side, no vulnerabilities in WebSocket-based communications were found, and neither was the API endpoint that implemented insecure YAML deserialization with pyYAML. By reviewing the Netsparker knowledge base, we also found that there is no support for websockets and deserialization vulnerabilities.
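Insecure deserialization, the class of bug Netsparker missed here, is easy to illustrate. PyYAML's unsafe loaders can construct arbitrary Python objects from a document (the fix is `yaml.safe_load`); the stdlib `pickle` module has the same flaw, sketched below since it needs no third-party install:

```python
import pickle

class Exploit:
    # __reduce__ tells pickle how to rebuild the object; an attacker can
    # abuse it to make deserialization invoke an arbitrary callable.
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

malicious = pickle.dumps(Exploit())
print(b"system" in malicious)  # True: the payload names the callable to invoke
# pickle.loads(malicious)      # DON'T: this would execute the shell command

# The defense: never deserialize untrusted input with pickle or yaml.load();
# prefer data-only formats (JSON) or yaml.safe_load().
```

Because the payload only misbehaves at load time, a black-box scanner that never triggers the deserialization path has little to observe, which is why this bug class pairs poorly with DAST and better with SAST, as the review notes next.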
That’s certainly not a dealbreaker, but something that needs to be taken into account. This also reinforces the need to use a SAST-based scanner (even if just a free, open source one) in the application security scanning stack to improve test coverage, in addition to other, manual security review processes.
Multiple levels of detail (from extensive, executive summary, to PCI-DSS level) are supported, both in a PDF or HTML export option. One nice feature we found is the ability to create F5 and ModSecurity rules for virtual patching. Also, scanned and crawled URLs can be exported from the reporting section, so it’s easy to review if your scanner hit any specific endpoints.
Scan results dashboard
Scan result details
Instead of describing the reports, we decided to export a few and attach them to this review for your enjoyment and assessment. All of them have been submitted to VirusTotal for our more cautious readers.
Netsparker’s reporting capabilities satisfy our requirements: the reports contain everything a security or AppSec engineer or a developer needs.
Since Netsparker integrates with JIRA and other ticketing systems, the general vulnerability management workflow for most teams will be supported. For lone security teams, or where modern workflows aren’t integrated, Netsparker also has an internal issue tracking system that will let the user track the status of each found issue and run rescans against specific findings to see if mitigations were properly implemented. So even if you don’t have other methods of triage or processes set up as part of a SDLC, you can manage everything through Netsparker.
Netsparker is extremely easy to set up and use. The wide variety of integrations allow it to be integrated into any number of workflows or management scenarios, and the integrated features and reporting capabilities have everything you would want from a standalone tool. As far as features are concerned, we have no objections.
The login flow stood out – from the simple interface and 2FA support all the way to the scripting interface that makes it easy to authenticate even in more complex environments – and the option to report on scanned and crawled endpoints helps users verify their scanning coverage.
Taking into account the fact that this is an automated scanner that relies on “black boxing” a deployed application without any instrumentation of the deployed environment or source code scanning, we think it is very accurate, though it could be improved (e.g., by adding the capability of detecting deserialization vulnerabilities). Following the review, Netsparker confirmed that the capability to detect deserialization vulnerabilities is included in the product development plans.
Nevertheless, we can highly recommend Netsparker.
Despite 88% of cybersecurity professionals believing automation will make their jobs easier, younger staffers are more concerned that the technology will replace their roles than their veteran counterparts, according to research by Exabeam.
Overall, satisfaction levels continued a 3-year positive trend, with 96% of respondents indicating they are happy with their role and responsibilities and 87% reportedly pleased with salary and earnings. Additionally, there was improvement in gender diversity, with female respondents increasing from 9% in 2019 to 21% this year.
“The concern for automation among younger professionals in cybersecurity was surprising to us. In trying to understand this sentiment, we could partially attribute it to lack of on-the-job training using automation technology,” said Samantha Humphries, security strategist at Exabeam.
“As we noted earlier this year in our State of the SOC research, ambiguity around career path or lack of understanding about automation can have an impact on job security. It’s also possible that this is a symptom of the current economic climate or a general lack of experience navigating the workforce during a global recession.”
AI and ML: A threat to job security?
Of respondents under the age of 45, 53% agreed or strongly agreed that AI and ML are a threat to their job security. This is contrasted with just 25% of respondents 45 and over who feel the same, possibly indicating that subsets of security professionals in particular prefer to write rules and manually investigate.
Interestingly, when asked directly about automation software, 89% of respondents under 45 years old believed it would improve their jobs, yet 47% are still threatened by its use. This is again in contrast with the 45 and over demographic, where 80% believed automation would simplify their work, and only 22% felt threatened by its use.
Examining sentiment around automation by region, 47% of US respondents were concerned about job security when automation software is in use, as were respondents in Singapore (54%), Germany (42%), Australia (40%) and the UK (33%).
In the survey, which drew insights from professionals throughout the US, the UK, AUS, Canada, India and the Netherlands, only 10% overall believed that AI and automation were a threat to their jobs.
On the flip side, there were noticeable increases in job approval across the board, with an upward trend in satisfaction around role and responsibilities (96%), salary (87%) and work/life balance (77%).
Diversity showing positive signs of improvement
When asked what else they enjoyed about their jobs, respondents listed working in an environment with professional growth (15%) as well as opportunities to challenge oneself (21%) as top motivators.
53% reported jobs that are either stressful or very stressful, which is down from last year (62%). Interestingly, despite being among those that are generally threatened by automation software, 100% of respondents aged 18-24 reported feeling secure in their roles and were happiest with their salaries (93%).
Though the number of female respondents increased this year, it remains to be seen whether this will emerge as a trend. This year’s male respondents (78%) are down 13% from last year (91%).
In 2019, nearly 41% had been in the profession for 10 years or more. This year, a larger percentage (83%) have 10 years of experience or less, and 34% have been in the cybersecurity industry for five years or less. Additionally, one-third do not have formal cybersecurity degrees.
“There is evidence that automation and AI/ML are being embraced, but this year’s survey exposed fascinating generational differences when it comes to professional openness and using all available tools to do their jobs,” said Phil Routley, senior product marketing manager, APJ, Exabeam.
“And while gender diversity is showing positive signs of improvement, it’s clear we still have a very long way to go in breaking down barriers for female professionals in the security industry.”
Exposures and cybersecurity challenges can turn out to be costly. According to statistics from the US Department of Health and Human Services (HHS), 861 breaches of protected health information have been reported over the last 24 months.
New research from RiskRecon and the Cyentia Institute pinpointed risk in the third-party healthcare supply chain and showed that healthcare’s high exposure rate indicates that managing a comparatively small Internet footprint is a big challenge for many organizations in that sector.
But there is a silver lining: gaining the visibility needed to pinpoint and rectify exposures in the healthcare risk surface is feasible.
The research and report are based on RiskRecon’s assessment of more than five million internet-facing systems across approximately 20,000 organizations, focusing exclusively on the healthcare sector.
Healthcare has one of the highest average rates of severe security findings relative to other industries. Furthermore, those rates vary hugely across institutions, meaning the worst exposure rates in healthcare are worse than the worst exposure rates in other sectors.
The rate of severe security findings decreases as employee count increases. For example, the rate of severe security findings in the smallest healthcare providers is 3x higher than that of the largest providers.
Sub-sectors vary
Sub-sectors within healthcare reveal different risk trends. The research shows that hospitals have a much larger Internet surface area (hosts, providers, countries), but maintain relatively low rates of security findings. Meanwhile, the nursing and residential care sub-sector has the smallest Internet footprint yet the highest levels of exposure. Outpatient (ambulatory) and social services mostly fall in between hospitals and nursing facilities.
Cloud deployment impacts
As digital transformation ushers in a plethora of changes, critical areas of risk exposure are also changing and expanding. While most healthcare firms host a majority of their Internet-facing systems on-prem, they do also leverage the cloud. We found that healthcare’s severe finding rate for high-value assets in the cloud is 10 times that of on-prem. This is the largest on-prem versus cloud exposure imbalance of any sector.
It must also be noted that not all cloud environments are the same. A previous RiskRecon report on the cloud risk surface discovered an average 12 times the difference between cloud providers with the highest and lowest exposure rates. This says more about the users and use cases of various cloud platforms than intrinsic security inequalities. In addition, as healthcare organizations look to migrate to the cloud, they should assess their own capabilities for handling cloud security.
The healthcare supply chain is at risk
It’s important to realize that the broader healthcare ecosystem spans numerous industries, and these entities often have deep connections into the healthcare provider’s facilities, operations, and information systems, meaning those organizations can have significant ramifications for third-party risk management.
When you dig into it, even though big pharma has the biggest footprint (hosts, third-party service providers, and countries of operation), they keep it relatively hygienic. Manufacturers of various types of healthcare apparatus and instruments show a similar profile of extensive assets yet fewer findings. Unfortunately, the information-heavy industries of medical insurance, EHR systems providers, and collection agencies occupy three of the top four slots for the highest rate of security findings.
“In 2020, Health Information Sharing and Analysis Center (H-ISAC) members across healthcare delivery, big pharma, payers and medical device manufacturers saw increased cyber risks across their evolving and sometimes unfamiliar supply chains,” said Errol Weiss, CSO at H-ISAC.
“Adjusting to the new operating environment presented by COVID-19 forced healthcare companies to rapidly innovate and adopt solutions like cloud technology that also added risk with an expanded digital footprint to new suppliers and partners with access to sensitive patient data.”
COVID-19 has forced developer agility into overdrive, as the tech industry’s quick push to adapt to changing dynamics has accelerated digital transformation efforts and necessitated the rapid introduction of new software features, patches, and functionalities.
During this time, organizations across both the private and public sector have been turning to open source solutions as a means to tackle emerging challenges while retaining the rapidity and agility needed to respond to evolving needs and remain competitive.
Since well before the pandemic, software developers have leveraged open source code as a means to speed development cycles. The ability to leverage pre-made packages of code rather than build software from the ground up has enabled them to save valuable time. However, the rapid adoption of open source has not come without its own security challenges, which developers and organizations must address.
Here are some best practices developers should follow when implementing open source code to promote security:
Know what and where open source code is in use
First and foremost, developers should create and maintain a record of where open source code is being used across the software they build. Applications today are usually designed using hundreds of unique open source components, which then reside in their software and workspaces for years.
As these open source packages age, there is an increasing likelihood of vulnerabilities being discovered in them and publicly disclosed. If the use of components is not closely tracked against the countless new vulnerabilities discovered every year, software leveraging these components becomes open to exploitation.
Attackers understand all too well how often teams fall short in this regard, and software intrusions via known open source vulnerabilities are a highly common source of breaches. Tracking open source code usage, along with vigilance around updates and vulnerabilities, will go a long way in mitigating security risk.
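At its simplest, that tracking is an inventory of component versions checked against an advisory feed. A toy sketch of the matching logic (the component names, versions, and advisory IDs below are made up for illustration):

```python
# Hypothetical component inventory and advisory feed
inventory = {"libfoo": "1.2.3", "libbar": "2.0.0"}
advisories = {
    # component -> list of (first fixed version, advisory ID)
    "libfoo": [("1.2.9", "EXAMPLE-CVE-0001")],
}

def parse(version: str) -> tuple:
    """Turn '1.2.3' into (1, 2, 3) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def audit(inventory: dict, advisories: dict) -> list:
    """Flag every component running a version older than a known fix."""
    findings = []
    for name, version in inventory.items():
        for fixed_in, advisory_id in advisories.get(name, []):
            if parse(version) < parse(fixed_in):
                findings.append((name, version, advisory_id))
    return findings

print(audit(inventory, advisories))  # [('libfoo', '1.2.3', 'EXAMPLE-CVE-0001')]
```

Real tools (software composition analysis scanners) automate exactly this loop at scale, with richer version semantics and live advisory data.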
Understand the risks before adopting open source
Aside from tracking vulnerabilities in the code that’s already in use, developers must do their research on open source components before adopting them in the first place. While an obvious first step is ensuring that there are no known vulnerabilities in the component in question, other factors – focused on the longevity of the software being built – should also be considered.
Teams should carefully consider the level of support offered for a given component. It’s important to get satisfactory answers to questions such as:
- How often is the component patched?
- Are the patches of high quality and do they address the most pressing security issues when released?
- Once implemented, are they communicated effectively and efficiently to the user base?
- Is the group or individual who built the component a trustworthy source?
Leverage automation to mitigate risk
It’s no secret that COVID-19 has altered developers’ working conditions. In fact, 38% of developers are now releasing software monthly or faster, up from 27% in 2018. But this increased pace often comes paired with unwanted budget cuts and organizational changes. As a result, the imperative to “do more with less” has become a rallying cry for business leaders. In this context, it is indisputable that automation across the entire IT security portfolio has skyrocketed to the top of the list of initiatives designed to improve operational efficiency.
While already an important asset for achieving true DevSecOps agility, automated scanning technology has become near-essential for any organization attempting to stay secure while leveraging open source code. Manually tracking and updating open source vulnerabilities across an organization’s entire software suite is hard work that only increases in difficulty with the scale of an organization’s software deployments. And what was inefficient in normal times has become unfeasible in the current context.
Automated scanning technologies alleviate the burden of open source security by handling processes that would otherwise take up precious time and resources. These tools are able to detect and identify open source components within applications, provide detailed risk metrics regarding open source vulnerabilities, and flag outdated libraries for developers to address. Furthermore, they provide detailed insight into thousands of public open source vulnerabilities, security advisories and bugs, to ensure that when components are chosen they are secure and reputable.
Finally, these tools help developers prioritize and triage remediation efforts once vulnerabilities are identified. Equipped with the knowledge of which vulnerabilities present the greatest risk, developers are able to allocate resources most efficiently to ensure security does not get in the way of timely release cycles.
Confidence in a secure future
When it comes to open source security, vigilance is the name of the game. Organizations must be sure to reiterate the importance of basic best practices to developers as they push for greater speed in software delivery.
While speed has long been understood to come at the cost of software security, this type of outdated thinking cannot persist, especially when technological advancements in automation have made such large strides in eliminating this classically understood tradeoff. By following the above best practices, organizations can be more confident that their COVID-19 driven software rollouts will be secure against issues down the road.
As the Information Age slowly gives way to the Fourth Industrial Revolution – with the rise of IoT and IIoT, on-demand availability of computer system resources, big data and analytics, and cyber attacks aimed at business environments all impacting our everyday lives – there’s an increasing need for knowledgeable cybersecurity professionals and, unfortunately, a widening cybersecurity workforce skills gap.
The cybersecurity skills gap is huge
A year ago, (ISC)² estimated that the global cybersecurity workforce numbered 2.8 million professionals, when there’s an actual need for 4.07 million.
According to a recent global study of cybersecurity professionals by the Information Systems Security Association (ISSA) and analyst firm Enterprise Strategy Group (ESG), there has been no significant progress towards a solution to this problem in the last four years.
“What’s needed is a holistic approach of continuous cybersecurity education, where each stakeholder needs to play a role versus operating in silos,” ISSA and ESG stated.
Those starting their career in cybersecurity need many years to develop real cybersecurity proficiency, the respondents agreed. They need cybersecurity certifications and hands-on experience (i.e., jobs) and, ideally, a career plan and guidance.
Continuous cybersecurity training and education are key
Aside from the core cybersecurity talent pool, new job recruits are new graduates from universities, consultants/contractors, employees at other departments within an organization, security/hardware vendors and career changers.
One thing they all have in common is the need for constant additional training, as technology advances and changes and attackers evolve their tactics, techniques and procedures.
Though most IT and security professionals use their own free time to improve their cyber skills, they should also be able to learn on the job and receive effective support from their employers for their continued career development.
Times are tough – there’s no doubt about that – but organizations must continue to invest in their employees’ career and skills development if they want to retain their current cybersecurity talent, develop it, and attract new, capable employees.
“The pandemic has shown us just how critical cybersecurity is to the successful operation of our respective economies and our individual lifestyles,” noted Deshini Newman, Managing Director EMEA, (ISC)².
Certifications show employers that cybersecurity professionals have the knowledge and skills required for the job, but also indicate that they are invested in keeping pace with a myriad of evolving issues.
“Maintaining a cybersecurity certification, combined with professional membership is evidence that professionals are constantly improving and developing new skills to add value to the profession and taking ownership for their careers. This new knowledge and understanding can be shared throughout an organisation to support security best practice, as well as ensuring cyber safety in our homes and communities,” she pointed out.
With both security budgets and talent pools negatively affected by the ongoing pandemic, state and local governments are struggling to cope with the constant wave of cyber threats more than ever before, a Deloitte study reveals.
The study is based on responses from 51 U.S. state and territory enterprise-level CISOs.
- COVID-19 has challenged continuity and amplified gaps in budget, talent and threats, and the need for partnerships.
- Collaboration with local governments and public higher education is critical to managing increasingly complex cyber risk within state borders.
- CISOs need a centralized structure to position cyber in a way that improves agility, effectiveness and efficiencies.
The report also details focus areas for states during the COVID-19 pandemic. While the pandemic has highlighted the resilience of public sector cyber leaders, it has also called attention to long-standing challenges facing state IT and cybersecurity organizations such as securing adequate budgets and talent, and coordinating consistent security implementation across agencies.
Remote work creating new opportunities for cyber threats
These challenges were exacerbated by the abrupt shift to remote work spurred by the pandemic. According to the study:
- Before the pandemic, 52% of respondents said less than 5% of staff worked remotely.
- During the pandemic, 35 states have had more than half of employees working remotely; nine states have had more than 90% remote workers.
“The last six months have created new opportunities for cyber threats and amplified existing cybersecurity challenges for state governments,” said Meredith Ward, director of policy and research at NASCIO.
“The budget and talent challenges experienced in recent years have only grown, and CISOs are now also faced with an acceleration of strategic initiatives to address threats associated with the pandemic.”
“However, continuing challenges with resources beset state CISOs/CIOs. This is evident when comparing the much higher levels of budget that federal agencies and other industries like financial services receive to fight cyber threats.”
The need for digital modernization amplified by the pandemic
State governments’ longstanding need for digital modernization has only been amplified by the pandemic, along with the essential role that cybersecurity needs to play in the discussion. Key takeaways from the 2020 study include:
- Fewer than 40% of states reported having a dedicated budget line item for cybersecurity.
- Half of states still allocate less than 3% of their total information technology budget to cybersecurity.
- CISOs identified financial fraud as three times greater a threat than they did in 2018.
- Overall, respondents said they believe the probability of a security breach is higher in the next 12 months, compared to responses to the same question in the 2018 study.
- Only 27% of states provide cybersecurity training to local governments and public education entities.
- Only 28% of states reported that they had collaborated extensively with local governments as part of their state’s security program during the past year, with 65% reporting limited collaboration.
SIEM and SOAR solutions are important tools in a cybersecurity stack. They gather a wealth of data about potential security incidents throughout your system and store that info for review. But just like nerve endings in the body sending signals, what good are these signals if there is no brain to process, categorize and correlate this information?
A vendor-agnostic XDR (Extended Detection and Response) solution is a necessary component for solving the data overload problem – a “brain” that examines all of the past and present data collected and assigns a collective meaning to the disparate pieces. Without this added layer, organizations are unable to take full advantage of their SIEM and SOAR solutions.
So, how do organizations implement XDR? Read on.
SIEM and SOAR act like nerves
It’s easy for solutions with acronyms to cause confusion. SOAR and SIEM are perfect examples, as they are two very different technologies that often get lumped together. They aren’t the same thing, and they do bring complementary capabilities to the security operations center, but they still don’t completely close the automation gap.
The SIEM is a decades-old solution that uses technology from that era to solve specific problems. At their core, SIEMs are data collection, workflow and rules engines that enable users to sift through alerts and group things together for investigation.
In the last several years, SOAR has been the favorite within the security industry’s marketing landscape. Just as the SIEM runs on rules, the SOAR runs on playbooks. These playbooks let an analyst automate steps in the event detection, enrichment, investigation and remediation process. And just like with SIEM rules, someone has to write and update them.
Because many organizations already have a SIEM, it seemed reasonable for SOAR providers to start by automating the output from the SIEM tool or security platform console. The flow looks like this: security controls send alerts to the SIEM; the SIEM uses rules written by the security team to filter the alerts down to a much smaller number (usually by a ratio of 1,000,000:1); the resulting SIEM events are sent to the SOAR, where playbooks written by the security team use workflow automation to investigate and respond to them.
SOAR investigation playbooks attempt to contextualize the events with additional data – often the same data that the SIEM has filtered out. Writing these investigation playbooks can occupy your security team for months, and even then, they only cover a few scenarios and automate simple tasks like VirusTotal lookups.
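The SIEM-to-SOAR flow described above can be reduced to a minimal sketch – the rule names, alert fields, and reputation lookup here are hypothetical, not any particular product's API:

```python
# Toy SIEM rule engine + SOAR enrichment playbook, per the flow above:
# raw alerts are filtered down by a rule, then survivors are enriched
# with a reputation lookup (standing in for e.g. a VirusTotal query).

def siem_filter(alerts, min_severity=7):
    """SIEM rule: keep only the tiny fraction of alerts worth escalating."""
    return [a for a in alerts if a["severity"] >= min_severity]

def enrich_playbook(event, reputation_db):
    """SOAR playbook step: contextualize an event with external data."""
    event["verdict"] = reputation_db.get(event["file_hash"], "unknown")
    return event

raw_alerts = [
    {"severity": 3, "file_hash": "aaa"},  # filtered out by the SIEM rule
    {"severity": 9, "file_hash": "bbb"},  # escalated to the SOAR
]
reputation = {"bbb": "malicious"}

escalated = [enrich_playbook(e, reputation) for e in siem_filter(raw_alerts)]
```

As the article notes, both stages only do what someone has explicitly written a rule or playbook for – the filtering and enrichment logic is all hand-authored.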
The verdict is that SOARs and SIEMs purport to perform all the actions necessary to automate the screening of alerts, but the technology in itself cannot do this. It requires trained staff to bring forth this capability by writing rules and playbooks.
Coming back to the analogy, this data can be compared to the nerves flowing through the human body. They fire off alerts that something has happened – alerts that mean nothing without a processing system that can gather context and explain what has happened.
Giving the nerves a brain
What the nerves need is a brain that can receive and interpret their signals. An XDR engine powered by Bayesian reasoning is a machine-powered brain that can investigate any output from the SIEM or SOAR at speed and scale. This replaces traditional Boolean logic (that is, searching for things IT teams already know to be suspicious) with a much richer way to reason about the data.
This additional layer of understanding will work out of the box with the products an organization already has in place to provide key correlation and context. For instance, imagine that a malicious act occurs. That malicious act is going to be observed by multiple types of sensors. All of that information needs to be put together, along with the context of the internal systems, the external systems and all of the other things that integrate at that point. This gives the system the information needed to know the who, what, when, where, why and how of the event.
This is what the system’s brain does. It boils all of the data down to: “I see someone bad doing something bad. I have discovered them. And now I am going to manage them out.” What the XDR brain is going to give the IT security team is more accurate, consistent results, fewer false positives and faster investigation times.
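The contrast between Boolean rules and Bayesian reasoning can be illustrated with a toy example – the signal names and likelihood ratios below are invented for illustration, not taken from any product:

```python
# A Boolean rule fires only on an exact pre-written condition, while a
# Bayesian score lets several individually weak signals combine into a
# strong verdict.

def boolean_rule(signals):
    # Fires only if a known-bad indicator is present.
    return signals.get("known_bad_hash", False)

def bayesian_score(signals, prior=0.01):
    # Start from a prior belief that activity is malicious, then multiply
    # the odds by each observed signal's likelihood ratio (illustrative).
    likelihood_ratios = {
        "odd_hour_login": 3.0,
        "new_admin_tool": 5.0,
        "lateral_smb_traffic": 8.0,
    }
    odds = prior / (1 - prior)
    for name, present in signals.items():
        if present and name in likelihood_ratios:
            odds *= likelihood_ratios[name]
    return odds / (1 + odds)   # convert odds back to a probability

signals = {"odd_hour_login": True, "new_admin_tool": True,
           "lateral_smb_traffic": True}
```

Here the Boolean rule sees nothing (no known-bad hash), while the Bayesian score pushes the same three weak observations past the 50% mark.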
How to apply an XDR brain
To get started with integrating XDR into your current system, take these three steps:
1. Deploy a solution that is vendor-agnostic and works out of the box. This XDR layer of security doesn’t need playbooks or rules. It changes the foundation of your security program and how your staff do their work. This reduces your commitment in time and budget for security engineering, or at least enables you to redirect it.
2. It has become much easier in the last several years to collect, store and – to some extent – analyze data. In particular, cloud architectures offer simple and cost-effective options for collecting and storing vast quantities of data. For this reason, it’s now possible to turn your sensors all the way up rather than letting in just a small stream of data.
3. Decide which risk reduction projects are critical for the team. Automation should release security professionals from mundane tasks so they can focus on high-value actions that truly reduce risk, like incident response, hunting and tuning security controls. There may also be budget that is freed up for new technology or service purchases.
Reading the signals
To make the most of SOARs and SIEMs, you need XDR – a tool that will take the data collected and add the context needed to turn thousands of alerts into one complete situation that is worth investigating.
The XDR layer is an addition to a company’s cybersecurity strategy that will most effectively use SIEM and SOAR, giving all those nerve signals a genius brain that can sort them out and provide the context needed in today’s cyber threat landscape.
According to a recent study, only a minority of software developers actually work at software development companies, which means that nowadays practically every company builds software in some form or another.
As a professional in the field of information security, it is your task to protect information, assets, and technologies. Obviously, the software built by or for your company that is collecting, transporting, storing, processing, and finally acting upon your company’s data, is of high interest. Secure development practices should be enforced early on and security must be tested during the software’s entire lifetime.
Within the (ISC)² common body of knowledge for CISSPs, software development security is listed as an individual domain. Several standards and practices covering security in the Software Development Lifecycle (SDLC) are available: ISO/IEC 27034:2011, ISO/IEC TR 15504, or NIST SP 800-64 Revision 2, to name a few.
All of the above ask for continuous assessment and control of artifacts on the source-code level, especially regarding coding standards and Common Weakness Enumerations (CWE), but only briefly mention static application security testing (SAST) as a possible way to address these issues. In the search for possible concrete tools, NIST provides SP 500-268 v1.1 “Source Code Security Analysis Tool Function Specification Version 1.1”.
In May 2019, NIST withdrew the aforementioned SP 800-64 Rev. 2. NIST SP 500-268 was published over nine years ago. This seems to be symptomatic of an underlying issue we see: the standards cannot keep up with the rapid pace of development and change in the field.
A good example is the rise of the programming language Rust, which addresses a major source of security issues in the classically used language C++ – namely, memory management. Major players in the field such as Microsoft and Google saw great advantages and announced that they would focus future developments towards Rust. While the standards mention that some development languages are superior to others, neither the mechanisms used by Rust nor Rust itself is mentioned.
In the field of Static Code Analysis, the information in NIST SP 500-268 is not wrong, but the paper simply does not mention advances in the field.
Let us briefly discuss two aspects: First, the wide use of open source software gave us insight into a vast quantity of source code changes and the reasoning behind them (security, performance, style). On top of that, we have seen increasing capacities of CPU power to process this data, accompanied by algorithmic improvements. Nowadays, we have a large lake of training data available. To use our company as an example, in order to train our underlying model for C++ alone, we are scanning changes in over 200,000 open source projects with millions of files containing rich history.
Secondly, in the past decade, we’ve witnessed tremendous advances in machine learning. We see tools like GPT-3 and their applications in source code being discussed widely. Classically, static source code analysis was the domain of Symbolic AI—facts and rules applied to source code. The realm of source code is perfectly suited for this approach since software source code has a well-defined syntax and grammar. The downside is that these rules were developed by engineers, which limits the pace in which rules can be generated. The idea would be to automate the rule construction by using machine learning.
Recently, we see research in the field of machine learning being applied to source code. Again, let us use our company as an example: By using the vast amount of changes in open source, our system looks out for patterns connected to security. It presents possible rules to an engineer together with found cases in the training set—both known and fixed, as well as unknown.
Also, the system supports parameters in the rules. Possible values for these parameters are collected by the system automatically. As a practical example, taint analysis follows incoming data to its use inside of the application to make sure the data is sanitized before usage. The system automatically learns possible sources, sanitization, and sink functions.
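The source/sanitizer/sink model behind taint analysis can be sketched in a few lines – the function names below are hypothetical examples, not the vendor's actual rule set:

```python
# Toy taint analysis: follow a value through an ordered chain of calls and
# flag any flow that reaches a sink without first passing a sanitizer.

SOURCES = {"read_request_param"}    # where untrusted data enters
SANITIZERS = {"escape_sql"}         # functions that neutralize it
SINKS = {"run_query"}               # dangerous uses of the data

def find_taint_flows(call_chain):
    """call_chain: list of function names a value flows through, in order."""
    tainted = False
    findings = []
    for fn in call_chain:
        if fn in SOURCES:
            tainted = True
        elif fn in SANITIZERS:
            tainted = False
        elif fn in SINKS and tainted:
            findings.append(fn)      # untrusted data reached a sink
    return findings

unsafe = find_taint_flows(["read_request_param", "run_query"])
safe = find_taint_flows(["read_request_param", "escape_sql", "run_query"])
```

A system like the one described above would learn the contents of the three sets automatically from observed code changes, rather than having engineers enumerate them by hand.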
Back to the NIST Special Papers: With the withdrawal of SP 800-64 Rev 2, users were pointed to NIST SP 800-160 Vol 1 for the time being until a new, updated white paper is published. This was at the end of May 2019. The nature of these papers is to only describe high-level best practices, list some examples, and stay rather vague in concrete implementation. Yet, the documents are the basis for reviews and audits. Given the importance of the field, it seems as if a major component is missing. It is also time to think about processes that would help us to keep up with the pace of technology.
Manufacturing medical devices with cybersecurity firmly in mind is an endeavor that, according to Christopher Gates, an increasing number of manufacturers are trying to get right.
Healthcare delivery organizations have started demanding better security from medical device manufacturers (MDMs), he says, and many have implemented secure procurement processes and contract language for MDMs that address the cybersecurity of the device itself, secure installation, cybersecurity support for the life of the product in the field, liability for breaches caused by a device not following current best practices, ongoing support for events in the field, and so on.
“For someone like myself who has been focused on cybersecurity at MDMs for over 12 years, this is excellent progress as it will force MDMs to take security seriously or be pushed out of the market by competitors who do take it seriously. Positive pressure from MDMs is driving cybersecurity forward more than any other activity,” he told Help Net Security.
Gates is a principal security architect at Velentium and one of the authors of the recently released Medical Device Cybersecurity for Engineers and Manufacturers, a comprehensive guide to medical device secure lifecycle management, aimed at engineers, managers, and regulatory specialists.
In this interview, he shares his knowledge regarding the cybersecurity mistakes most often made by manufacturers, on who is targeting medical devices (and why), his view on medical device cybersecurity standards and initiatives, and more.
[Answers have been edited for clarity.]
Are attackers targeting medical devices with a purpose other than to use them as a way into a healthcare organization’s network?
The easy answer to this is “yes,” since many MDMs in the medical device industry perform “competitive analysis” on their competitors’ products. It is much easier and cheaper for them to have a security researcher spend a few hours extracting an algorithm from a device for analysis than to spend months or even years of R&D work to pioneer a new algorithm from scratch.
Also, there is a large, hundreds-of-millions-of-dollars industry of companies who “re-enable” consumed medical disposables. This usually requires some fairly sophisticated reverse-engineering to return the device to its factory default condition.
Lastly, the medical device industry, when grouped together with the healthcare delivery organizations, constitutes part of critical national infrastructure. Other industries in that class (such as nuclear power plants) have experienced very directed and sophisticated attacks targeting safety backups in their facilities. These attacks seem to be initial testing of a cyber weapon that may be used later.
While these are clearly nation-state level attacks, you have to wonder if these same actors have been exploring medical devices as a way to inhibit our medical response in an emergency. I’m speculating: we have no evidence that this has happened. But then again, if it has happened there likely wouldn’t be any evidence, as we haven’t been designing medical devices and infrastructure with the ability to detect potential cybersecurity events until very recently.
What are the most often exploited vulnerabilities in medical devices?
It won’t come as a surprise to anyone in security when I say “the easiest vulnerabilities to exploit.” An attacker is going to start with the obvious ones, and then increasingly get more sophisticated. Mistakes made by developers include:
Unsecured firmware updating
I personally always start with software updates in the field, as they are so frequently implemented incorrectly. An attacker’s goal here is to gain access to the firmware with the intent of reverse-engineering it back into easily-readable source code that will yield more widely exploitable vulnerabilities (e.g., one impacting every device in the world). All firmware update methods have at least three very common potential design vulnerabilities. They are:
- Exposure of the binary executable (i.e., it isn’t encrypted)
- Corrupting the binary executable with added code (i.e., there isn’t an integrity check)
- A rollback attack which downgrades the version of firmware to a version with known exploitable vulnerabilities (there isn’t metadata conveying the version information).
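An update routine that closes all three gaps might look like the following sketch – the key handling, MAC construction, and return values are illustrative assumptions, not a production design (a real device would use asymmetric signatures and hardware-protected keys):

```python
# Sketch of firmware-update verification covering the three design
# vulnerabilities listed above: authenticity/integrity (a MAC over the
# image), and anti-rollback (version metadata bound into the MAC).

import hashlib
import hmac

DEVICE_KEY = b"device-unique-secret"   # provisioned at manufacture (illustrative)
current_version = 7

def verify_update(blob, tag, version, installed_version):
    # 1. Integrity/authenticity: reject images whose MAC doesn't verify.
    expected = hmac.new(DEVICE_KEY, version.to_bytes(4, "big") + blob,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return "reject: bad signature"
    # 2. Anti-rollback: reject versions at or below the installed one.
    if version <= installed_version:
        return "reject: rollback"
    # 3. Only now would the (separately encrypted) image be decrypted
    #    and written to flash.
    return "accept"

blob = b"\x01\x02firmware-image"
good_tag = hmac.new(DEVICE_KEY, (8).to_bytes(4, "big") + blob,
                    hashlib.sha256).digest()
old_tag = hmac.new(DEVICE_KEY, (7).to_bytes(4, "big") + blob,
                   hashlib.sha256).digest()
```

Binding the version number into the authenticated data is what stops an attacker from replaying a genuinely signed but older, vulnerable image.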
Overlooking physical attacks
Physical attacks can be mounted:
- Through an unsecured JTAG/SWD debugging port
- Via side-channel exploits (power monitoring, timing, etc.) that expose the values of cryptographic keys
- By sniffing internal busses, such as SPI and I2C
- By exploiting flash memory external to the microcontroller (a $20 cable can get it to dump all of its contents)
Manufacturing support left enabled
Almost every medical device needs certain functions to be available during manufacturing. These are usually for testing and calibration, and none of them should be functional once the device is fully deployed. Manufacturing commands are frequently documented in PDF files used for maintenance, and often only have minor changes across product/model lines inside the same manufacturer, so a little experimentation goes a long way in letting an attacker get access to all kinds of unintended functionality.
No communication authentication
Just because a communications medium connects two devices doesn’t mean that the device being connected to is the device that the manufacturer or end-user expects it to be. No communications medium is inherently secure; it’s what you do at the application level that makes it secure.
Bluetooth Low Energy (BLE) is an excellent example of this. Immediately following a pairing (or re-pairing), a device should always, always perform a challenge-response process (which utilizes cryptographic primitives) to confirm it has paired with the correct device.
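Such a post-pairing check might be sketched as follows – this is an application-level illustration with invented key material, not the BLE pairing protocol itself:

```python
# Challenge-response over an established BLE link: the central sends a
# random nonce; only a peer holding the pre-shared application key can
# compute the matching HMAC, proving it is the expected device.

import hashlib
import hmac
import os

SHARED_KEY = b"provisioned-app-level-key"   # illustrative; shared out of band

def challenge():
    return os.urandom(16)                   # fresh nonce defeats replay

def respond(key, nonce):
    return hmac.new(key, nonce, hashlib.sha256).digest()

def peer_is_authentic(nonce, response):
    return hmac.compare_digest(respond(SHARED_KEY, nonce), response)

nonce = challenge()
```

The point is that pairing alone proves nothing about identity; the cryptographic exchange at the application layer is what does.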
I remember attending an on-stage presentation of a new class II medical device with a BLE interface. From the audience, I immediately started to explore the device with my smartphone. This device had no authentication (or authorization), so I was able to perform all operations exposed on the BLE connection. I was engrossed in this interface when I suddenly realized there was some commotion on stage as they couldn’t get their demonstration to work: I had accidentally taken over the only connection the device supported. (I then quickly terminated the connection to let them continue with the presentation.)
What things must medical device manufacturers keep in mind if they want to produce secure products?
There are many aspects to incorporating security into your development culture. These can be broadly lumped into activities that promote security in your products, versus activities that convey a false sense of security and are actually a waste of time.
Probably the most important thing that a majority of MDMs need to understand and accept is that their developers have probably never been trained in cybersecurity. Most developers have limited knowledge of how to incorporate cybersecurity into the development lifecycle, where to invest time and effort in securing a device, what artifacts are needed for premarket submission, and how to properly utilize cryptography. Without knowing the details, many managers assume that security is being adequately included somewhere in their company’s development lifecycle; most are wrong.
To produce secure products, MDMs must follow a secure “total product life cycle,” which starts on the first day of development and ends years after the product’s end of life or end of support.
They need to:
- Know the three areas where vulnerabilities are frequently introduced during development (design, implementation, and through third-party software components), and how to identify, prevent, or mitigate them
- Know how to securely transfer a device to production and securely manage it once in production
- Recognize an MDM’s place in the device’s supply chain: not at the end, but in the middle. An MDM’s cybersecurity responsibilities extend up and down the chain. They have to contractually enforce cybersecurity controls on their suppliers, and they have to provide postmarket support for their devices in the field, up through and after end-of-life
- Create and maintain Software Bills of Materials (SBOMs) for all products, including legacy products. Doing this work now will help them stay ahead of regulation and save them money in the long run.
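For a sense of what an SBOM records, here is a minimal CycloneDX-style fragment – the component, version, and package URL are illustrative placeholders, not from any real device:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.2",
  "components": [
    {
      "type": "library",
      "name": "example-rtos-kernel",
      "version": "10.3.1",
      "purl": "pkg:github/example-org/example-rtos@10.3.1"
    }
  ]
}
```

When a vulnerability is later disclosed in a listed component, anyone holding the SBOM can immediately tell whether the device is affected.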
They must avoid mistakes like:
- Not thinking that a medical device needs to be secured
- Assuming their development team ‘can’ and ‘is’ securing their product
- Not designing-in the ability to update the device in the field
- Assuming that all vulnerabilities can be mitigated by a field update
- Only considering the security of one aspect of your design (e.g., its wireless communication protocol). Security is a chain: for the device to be secure, all the links of the chain need to be secure. Attackers are not going to consider certain parts of the target device ‘out of bounds’ for exploiting.
Ultimately, security is about protecting the business model of an MDM. This includes the device’s safety and efficacy for the patient, which is what the regulations address, but it also includes public opinion, loss of business, counterfeit accessories, theft of intellectual property, and so forth. One mistake I see companies frequently make is doing the minimum on security to gain regulatory approval, but neglecting to protect their other business interests along the way – and those can be very expensive to overlook.
What about the developers? Any advice on skills they should acquire or brush up on?
First, I’d like to take some pressure off developers by saying that it’s unreasonable to expect that they have some intrinsic knowledge of how to implement cybersecurity in a product. Until very recently, cybersecurity was not part of traditional engineering or software development curriculum. Most developers need additional training in cybersecurity.
And it’s not only the developers. More than likely, project management has done them a huge disservice by creating a system-level security requirement that says something like, “Prevent ransomware attacks.” What is the development team supposed to do with that requirement? How is it actionable?
At the same time, involving the company’s network or IT cybersecurity team is not going to be an automatic fix either. IT Cybersecurity diverges from Embedded Cybersecurity in many respects, from detection to implementation of mitigations. No MDM is going to be putting a firewall on a device that is powered by a CR2032 battery anytime soon; yet there are ways to secure such a low-resource device.
In addition to the how-to book we wrote, Velentium will soon offer training available specifically for the embedded device domain, geared toward creating a culture of cybersecurity in development teams. My audacious goal is that within 5 years every medical device developer I talk to will be able to converse intelligently on all aspects of securing a medical device.
What cybersecurity legislation/regulation must companies manufacturing medical devices abide by?
It depends on the markets you intend to sell into. While the US has had the Food and Drug Administration (FDA) refining its medical device cybersecurity position since 2005, others are more recent entrants into this type of regulation, including Japan, China, Germany, Singapore, South Korea, Australia, Canada, France, Saudi Arabia, and the greater EU.
While all of these regulations have the same goal of securing medical devices, how they get there is anything but harmonized among them. Even the level of abstraction varies, with some focused on processes while others on technical activities.
But there are some common concepts represented in all these regulations, such as:
- Risk management
- Software bill of materials (SBOM)
- “Total Product Lifecycle”
But if you plan on marketing in the US, the two most important documents are FDA’s:
- 2018 – Draft Guidance: Content of Premarket Submissions for Management of Cybersecurity in Medical Devices
- 2016 – Final Guidance: Postmarket Management of Cybersecurity in Medical Devices (The 2014 version of the guidance on premarket submissions can be largely ignored, as it no longer represents the FDA’s current expectations for cybersecurity in new medical devices).
What are some good standards for manufacturers to follow if they want to get cybersecurity right?
The Association for the Advancement of Medical Instrumentation’s standards are excellent. I recommend AAMI TIR57: 2016 and AAMI TIR97: 2019.
Also very good is the Healthcare & Public Health Sector Coordinating Council’s (HPH SCC) Joint Security Plan. And, to a lesser extent, the NIST Cyber Security Framework.
The work being done at the US Department of Commerce / NTIA on SBOM definition for vulnerability management and postmarket surveillance is very good as well, and worth following.
What initiatives exist to promote medical device cybersecurity?
Notable initiatives I’m familiar with include, first, the aforementioned NTIA work on SBOMs, now in its second year. There are also several excellent working groups at HSCC, including the Legacy Medical Device group and the Security Contract Language for Healthcare Delivery Organizations group. I’d also point to numerous working groups in the H-ISAC Information Sharing and Analysis Organization (ISAO), including the Securing the Medical Device Lifecycle group.
And I have to include the FDA itself here, which is in the process of revising its 2018 premarket draft guidance; we hope to see the results of that effort in early 2021.
What changes do you expect to see in the medical devices cybersecurity field in the next 3-5 years?
So much is happening at high and low levels. For instance, I hope to see the FDA get more of a direct mandate from Congress to enforce security in medical devices.
Also, many working groups of highly talented people are working on ways to improve the security posture of devices, such as the NTIA SBOM effort to improve the transparency of software “ingredients” in a medical device, allowing end-users to quickly assess their risk level when new vulnerabilities are discovered.
Semiconductor manufacturers continue to give us great mitigation tools in hardware, such as side-channel protections, cryptographic accelerators, and virtualized security cores. Arm TrustZone is a great example.
And at the application level, we’ll continue to see more and better packaged tools, such as cryptographic libraries and processes, to help developers avoid cryptography mistakes. Also, we’ll see more and better process tools to automate the application of security controls to a design.
HDOs and other medical device purchasers are better informed than ever before about embedded cybersecurity features and best practices. That trend will continue, and will further accelerate demand for better-secured products.
I hope to see some effort at harmonization between all the federal, state, and foreign regulations that have been recently released with those currently under consideration.
One thing is certain: legacy medical devices that can’t be secured will only go away when we can replace them with new medical devices that are secure by design. Bringing new devices to market takes a long time. There’s lots of great innovation underway, but really, we’re just getting started!
Cyberattacks are becoming increasingly sophisticated as tools and services on the dark web – and even the surface web – enable low-skill threat actors to create highly evasive threats. Unfortunately, most of today’s modern malware evades traditional signature-based anti-malware services, arriving to endpoints with ease. As a result, organizations lacking a layered security approach often find themselves in a precarious situation. Furthermore, threat actors have also become extremely successful at phishing users out of their credentials or simply brute forcing credentials thanks to the widespread reuse of passwords.
A lot has changed across the cybersecurity threat landscape in the last decade, but one thing has remained the same: the endpoint is under siege. What has changed is how attackers compromise endpoints. Threat actors have learned to be more patient after gaining an initial foothold within a system (and essentially scope out their victim).
Take the massive Norsk Hydro ransomware attack as an example: The initial infection occurred three months prior to the attacker executing the ransomware and locking down much of the manufacturer’s computer systems. That was more than enough time for Norsk to detect the breach before the damage could be done, but the reality is most organizations simply don’t have a sophisticated layered security strategy in place.
In fact, the most recent IBM Cost of a Data Breach Report found it took organizations an average of 280 days to identify and contain a breach. That’s more than 9 months that an attacker could be sitting on your network planning their coup de grâce.
So, what exactly are attackers doing with that time? How do they make their way onto the endpoint undetected?
It usually starts with a phish. No matter what report you choose to reference, most point out that around 90% of cyberattacks start with a phish. There are several different outcomes associated with a successful phish, ranging from compromised credentials to a remote access trojan running on the computer. For credential phishes, threat actors have most recently been leveraging customizable subdomains of well-known cloud services to host legitimate-looking authentication forms.
The above screenshot is from a recent phish WatchGuard Threat Lab encountered. The link within the email was customized to the individual recipient, allowing the attacker to populate the victim’s email address into the fake form to increase credibility. The phish was even hosted on a Microsoft-owned domain, albeit on a subdomain (servicemanager00) under the attacker’s control, so you can see how an untrained user might fall for something like this.
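A sketch of the kind of automated check a mail filter might run here: flag links that point at customer-controlled subdomains of well-known cloud platforms, since anyone can register a subdomain like the one used in this phish. The helper name and the list of base domains are illustrative assumptions, not a complete detection rule.

```python
from urllib.parse import urlparse

# Cloud services that hand out customer-controlled subdomains.
# This list is illustrative and far from exhaustive; a matching
# registrable domain alone is not proof that a link is legitimate.
CUSTOMIZABLE_CLOUD_DOMAINS = {
    "blob.core.windows.net",
    "web.core.windows.net",
    "azurewebsites.net",
}

def flag_suspicious_link(url: str) -> bool:
    """Return True if the URL points at a customer-controlled subdomain
    of a well-known cloud platform (worth extra scrutiny)."""
    host = (urlparse(url).hostname or "").lower()
    for base in CUSTOMIZABLE_CLOUD_DOMAINS:
        if host.endswith("." + base):
            # e.g. servicemanager00.blob.core.windows.net
            return True
    return False
```

A check like this only surfaces links for closer review; it cannot distinguish a legitimate tenant from an attacker-registered one on its own.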
When the phish delivers malware instead, the secondary payload is usually a remote-access trojan or botnet of some form that includes a suite of tools like keyloggers, shell script-injectors, and the ability to download additional modules. The infection isn’t usually limited to a single endpoint for long after this. Attackers can use their foothold to identify other targets on the victim’s network and rope them in as well.
It’s even easier if the attackers manage to get hold of a valid set of credentials and the organization hasn’t deployed multi-factor authentication. It allows the threat actor to essentially walk right in through the digital front door. They can then use the victim’s own services – like built-in Windows scripting engines and software deployment services – in a living-off-the-land attack to carry out malicious actions. We commonly see threat actors leverage PowerShell to deploy fileless malware in preparation to encrypt and/or exfiltrate critical data.
The WatchGuard Threat Lab recently identified an ongoing infection while onboarding a new customer. By the time we arrived, the threat actor had already been on the victim’s network for some time thanks to compromising at least one local account and one domain account with administrative permissions. Our team was not able to identify how exactly the threat actor obtained the credentials, or how long they had been present on the network, but as soon as our threat hunting services were turned on, indicators immediately lit up identifying the breach.
In this attack, the threat actors used a combination of Visual Basic scripts and two popular PowerShell toolkits – PowerSploit and Cobalt Strike – to map out the victim’s network and launch malware. One behavior we saw, enabled by Cobalt Strike’s shellcode decoder, allowed the threat actors to download malicious commands, load them into memory, and execute them directly from there, without the code ever touching the victim’s hard drive. These fileless malware attacks can range from difficult to impossible to detect with traditional endpoint anti-malware engines that rely on scanning files to identify threats.
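One common starting point for defenders hunting this kind of activity is decoding PowerShell’s `-EncodedCommand` argument, since fileless payloads are frequently delivered as a base64-encoded, UTF-16LE command string. A minimal sketch of that idea – the regex and helper name are assumptions, not a production detection rule:

```python
import base64
import re

# Matches PowerShell's -e / -enc / -EncodedCommand flag followed by a
# base64 blob. Illustrative only; real detections need far more context.
ENCODED_FLAG = re.compile(
    r"-e(?:nc(?:odedcommand)?)?\s+([A-Za-z0-9+/=]{16,})", re.IGNORECASE
)

def decode_encoded_powershell(cmdline: str):
    """Return the decoded command if cmdline carries an encoded-command
    argument, else None. PowerShell encodes commands as UTF-16LE base64."""
    if "powershell" not in cmdline.lower():
        return None
    match = ENCODED_FLAG.search(cmdline)
    if not match:
        return None
    try:
        return base64.b64decode(match.group(1)).decode("utf-16-le")
    except (ValueError, UnicodeDecodeError):
        return None
```

Surfacing the decoded command lets an analyst or an automated rule inspect what would otherwise execute purely in memory.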
Elsewhere on the network, our team saw the threat actors using PsExec, a built-in Windows tool, to launch a remote access trojan with SYSTEM-level privileges thanks to the compromised domain admin credentials. The team also identified the threat actors’ attempts to exfiltrate sensitive data to a Dropbox account using a command-line cloud storage management tool.
Fortunately, they were able to identify and clean up the malware quickly. However, without the victim changing the stolen credentials, the attacker could likely have re-initiated their attack at will. Had the victim deployed an advanced endpoint detection and response (EDR) engine as part of their layered security strategy, they could have stopped or slowed the damage caused by those stolen credentials.
Attackers are targeting businesses indiscriminately, even small organizations. Relying on a single layer of protection simply no longer works to keep a business secure. No matter the size of an organization, it’s important to adopt a layered security approach that can detect and stop modern endpoint attacks. This means protections from the perimeter down to the endpoint, including user training in the middle. And don’t forget about the role of multi-factor authentication (MFA) – it could be the difference between stopping an attack and becoming another breach statistic.
It’s a story I have seen play out many times over two decades in the Identity and Access Management (IAM) field: An organization determines that it needs a more robust Identity Governance and Administration (IGA) program, they kick off a project to realize this goal, but after a promising start, the whole effort falls apart within six to twelve months.
People become frustrated about wasted time and money. The audit and compliance teams who need IGA grow disappointed, perhaps even anxious. The regulatory risks they are trying to mitigate continue to loom large. Finger pointing ensues; arguing and discord follow.
What an IGA program does
Don’t get me wrong, a fine-tuned and efficient IGA program is well worth it. An IGA program helps ensure an organization’s data security, assists in completing audits, and supports significant boosts in operational agility.
The three common IGA project mistakes
The specific things that can go wrong vary by company, but they follow a sadly familiar pattern. Three common mistakes stand out in particular:
1. Underestimating the costs
An IGA project is an IT project, but it’s so much more. Viewing IGA simply as a matter of buying and installing software is an avoidable error. To work, IGA usually needs advisory services on top of in-house resources. Application integration costs may get under-counted as well, as project stakeholders fail to grasp the interconnected nature of the IGA process. For example, the IGA solution usually has to link with HR management systems and so forth. Training costs can be higher than people predict. Finding people with IGA skills also tends to take longer and cost more than anyone might guess at the outset.
2. Not building for user experience (UX)
IGA end users need to feel comfortable and confident on the system, or the whole project finds itself in jeopardy. People want to get their jobs done. They generally don’t have the time or interest in learning a new system and lexicon. If using the solution isn’t a basically effortless part of their day-to-day work lives, users will seek ways around it. They’ll call the help desk or contact a colleague, claiming they cannot complete IGA tasks. This sort of slow-building mutiny can wreck an IGA program.
3. Failing to secure or sustain C-suite sponsorship
IGA projects can be challenging. They require collaboration across departments, and strong executive sponsorship is critical for overcoming potential points of friction. In my experience, one can predict that trouble is on the horizon as soon as the executive sponsor stops coming to status meetings. This usually isn’t the executive’s fault. He or she is simply quite busy and has not been properly briefed on the importance of his or her role in ensuring a good outcome for the investment in IGA.
How to avoid IGA project problems
These pitfalls need not sink an IGA program. Being conscious of the potential problems and addressing them in the project planning stage helps a great deal. Budgeting accurately, thinking through UX, and making expectations clear with executive sponsors provide the basis for IGA success.
There’s also a new approach in IGA implementation that can make a huge difference. It involves integrating the IGA toolset with an existing application platform – a system that everyone is already using for IT-related workloads. These platforms exist in most organizations; a popular example is ServiceNow.
Building IGA on top of an existing platform delivers a number of distinct advantages for the program:
- It maximizes the current investment in the platform
- It’s less expensive than purchasing an IGA solution that is its own stack—a savings that applies to both the build and manage phases of its life cycle
- No new skillsets are required, either, which avoids the costly recruit/train/retain struggles that can arise with standalone IGA solutions
- Changes to the IGA system are more economical as well when it runs atop a familiar incumbent platform in the organization
Employees are already using the platform interfaces, so there are few training issues or UX problems inherent in launching an IGA program that is seamlessly integrated into existing processes. Knowledge workers know the interfaces and workflows to request and approve identity governance services. They won’t have to bookmark a new URL or learn a new way of doing things, speeding overall acceptance.
Application platforms are also increasingly becoming one of the main vehicles for digital transformation (DX) projects. This makes sense, given the importance of IT agility and smooth IT operations in the DX vision. By linking IGA with DX, it becomes easier to attract sustainable executive interest in the IGA program.
C-level executives sponsor DX projects; bonuses may hinge on them. They know DX projects are ambitious and potential generators of strong return on investment. With IGA built into DX, the identity governance program will not be neglected.
Avoiding the common pitfalls inherent in launching an IGA program will take some focus and work, but the resulting benefits are well worth the effort. As you look to refresh or improve your current IGA program, consider leveraging what platforms you already have in place to achieve the most successful outcome.
Despite ongoing economic uncertainty amidst a global pandemic, many dealmakers remain optimistic about the outlook for the year ahead as they increasingly pursue alternative merger and acquisition (M&A) methods to navigate the crisis and pursue new disruptive business growth strategies.
According to a Deloitte survey of 1,000 U.S. corporate M&A executives and private equity firm professionals, 61% of survey respondents expect U.S. M&A activity to return to pre-COVID-19 levels within the next 12 months.
Soon after the WHO declared COVID-19 a pandemic on March 11, deal activity in the U.S. plunged — most notably during April and May.
Responding M&A executives say they tentatively paused (92%) or abandoned (78%) at least one transaction as a result of the pandemic outbreak. However, since March 2020, possibly aiming to take advantage of pandemic-driven business disruptions, 60% say their organizations have been more focused on pursuing new deals.
“M&A executives have moved quickly to adapt and uncover value in new and innovative ways as systemic change driven by the pandemic has resulted in alternative approaches to transactions,” said Russell Thomson, partner, Deloitte & Touche LLP, and Deloitte’s U.S. merger and acquisition services practice leader.
“We expect both traditional and alternative M&A to be an important lever for dealmakers as businesses recover and thrive in a post-COVID economy.”
Alternative dealmaking on the rise
For many, alternative deals are quickly outpacing traditional M&A activity as the search for value intensifies in a low-growth environment.
When asked which type of deals their organizations are most interested in pursuing, responding corporate M&A executives’ top choice was alternatives to traditional M&A, including alliances, joint ventures, and Special Purpose Acquisition Companies (45%) — ranking higher than acquisitions (35%).
Private equity investors plan to remain more focused on traditional acquisitions (53%), while simultaneously pushing pursuit of M&A alternatives — including private investment in public equity deals, minority stakes, club deals and alliances (32%).
“As businesses prepare for a post-COVID world, including fundamentally reshaped economies and societies, the dealmaking environment will also materially change,” said Mark Purowitz, principal, Deloitte Consulting LLP, with Deloitte’s mergers and acquisitions consulting practice, and leader of the firm’s Future of M&A initiative.
“Companies were starting to expand their definition of M&A to include partnerships, alliances, joint ventures and other alternative investments that create intrinsic and long-lasting value, but COVID-19 has accelerated dealmakers’ needs to create more optionality for their organizations’ internal and external ecosystems.”
Virtual dealmaking to continue playing large role post-pandemic
Eighty-seven percent of M&A professionals surveyed report that their organizations were able to effectively manage a deal in a purely virtual environment – so much so that 55% anticipate that virtual dealmaking will be the preferred platform even after the pandemic is over.
However, virtual dealmaking is not without its challenges. Fifty-one percent noted that cybersecurity threats are their organizations’ biggest concern around executing deals virtually.
“When it comes to cyber in an M&A world — it’s important to develop cyber threat profiles of prospective targets and portfolio companies to determine the risks each present,” said Deborah Golden, Deloitte Risk & Financial Advisory, cyber and strategic risk leader, Deloitte & Touche LLP.
“CISOs understand how a data breach can negatively impact the valuation and the underlying deal structure itself. Leaving cyber out of that risk picture may lead to not only brand and reputational risk, but also significant and unaccounted remediation costs.”
Other virtual dealmaking concerns included the ability to forge relationships with management teams (40%) and extended regulatory approvals (39%). When it comes to effectively managing the integration phase in a virtual environment, technology integration (16%) and legal entity alignment or simplification (16%) are surveyed M&A executives’ largest and most prevalent hurdles.
“It may be too early to assess the long-term implications of virtual dealmaking as many of the deals currently in progress now are resulting from management relationships that were formed pre-COVID. We also expect integration in a virtual setting will become much more complex a few months from now,” said Thomson.
“Culture and compatibility issues should be given greater attention on the diligence side, as they pose major downstream integration implications.”
International dealmaking declines, focus on domestic-only deals
Interest in foreign M&A targets declined in 2020 as corporate executives reported a significant shift in their approach to international dealmaking, with 17% reporting no plans to execute cross-border deals in the current economic environment, an 8 percentage point increase from 2019.
In addition, 57% of M&A executives say less than half of their current transactions involve acquiring targets operating primarily in foreign markets.
Notably, the number of survey respondents interested in pursuing deals with U.K. targets dropped by 8 percentage points, while Chinese targets declined by 7 percentage points. Interest in Canadian (32%) and Central American (19%) targets remained highest.
Andrew Magnusson started his information security career 20 years ago, and in this book he shares the knowledge he has accumulated to help readers eliminate security weaknesses and threats within their systems.
As he points out in the introduction, bugs are everywhere, but there are actions and processes the reader can apply to eliminate or at least mitigate the associated risks.
The author starts off by explaining vulnerability management basics, the importance of knowing your network and the process of collecting and analyzing data.
He explains the importance of a vulnerability scanner and why it is essential to configure and deploy it correctly, since it provides information that is vital to successfully completing the vulnerability management process.
The next step is to automate the process, which prioritizes vulnerabilities and frees up time to work on more severe issues, consequently boosting the organization’s security posture.
Finally, it is time to decide what to do with the vulnerabilities you have detected, which means choosing the appropriate security measures, whether it’s patching, mitigation or systemic measures. When the risk has a low impact, there’s also the option of accepting it, but this still needs to be documented and agreed upon.
An important part of this process, and perhaps also the hardest, is building relationships within the organization. The reader needs to respect office politics and make sure all the decisions and changes they make are approved by their superiors.
The second part of the book is practical, with the author guiding the reader through the process of building their own vulnerability management system, with a detailed analysis of the open source tools they need to use – such as Nmap, OpenVAS, and cve-search – all supported by code examples.
The reader will learn how to build an asset and vulnerability database and how to keep it accurate and up to date. This is especially important when generating reports, as those need to be based on recent vulnerability findings.
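As a rough illustration of the kind of asset-and-vulnerability database the book walks through building – the SQLite schema and helper names below are assumptions for the sketch, not the book’s exact design:

```python
import sqlite3

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Create a minimal asset/vulnerability database."""
    conn = sqlite3.connect(path)
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS assets (
            ip        TEXT PRIMARY KEY,
            hostname  TEXT,
            last_seen TEXT
        );
        CREATE TABLE IF NOT EXISTS vulnerabilities (
            cve_id   TEXT,
            asset_ip TEXT REFERENCES assets(ip),
            cvss     REAL,
            detected TEXT,
            PRIMARY KEY (cve_id, asset_ip)
        );
    """)
    return conn

def record_finding(conn, ip, hostname, cve_id, cvss, seen):
    # Upserts keep the database current, so reports generated from it
    # reflect the most recent scan results rather than stale findings.
    conn.execute("INSERT OR REPLACE INTO assets VALUES (?, ?, ?)",
                 (ip, hostname, seen))
    conn.execute("INSERT OR REPLACE INTO vulnerabilities VALUES (?, ?, ?, ?)",
                 (cve_id, ip, cvss, seen))
    conn.commit()
```

In practice, the data for such inserts would come from parsed Nmap and OpenVAS output, enriched with CVE details from cve-search.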
Who is it for?
Practical Vulnerability Management is aimed at security practitioners who are responsible for protecting their organization and tasked with boosting its security posture. It is assumed they are familiar with Linux and Python.
Despite the technical content, the book is an easy read and offers comprehensive solutions to keeping an organization secure and always prepared for possible attacks.
Global organizations continue to put their customers’ cardholder data at risk due to a lack of long-term payment security strategy and execution, the Verizon report warns.
With many companies struggling to retain qualified CISOs or security managers, the lack of long-term security thinking is severely impacting sustained compliance within the Payment Card Industry Data Security Standard (PCI DSS).
Cybercriminals still mostly targeting payment data
Payment data remains one of the most sought after and lucrative targets by cybercriminals with 9 out of 10 data breaches being financially motivated, as highlighted by the report. Within the retail sector alone, 99 percent of security incidents were focused on acquiring payment data for criminal use.
On average only 27.9 percent of global organizations maintained full compliance with the PCI DSS, which was developed to help businesses that offer card payment facilities protect their payment systems from breaches and theft of cardholder data.
More concerning, this is the third successive year that a decline in compliance has occurred with a 27.5 percentage point drop since compliance peaked in 2016.
“Unfortunately we see many businesses lacking the resources and commitment from senior business leaders to support long-term data security and compliance initiatives. This is unacceptable,” said Sampath Sowmyanarayan, President, Global Enterprise, Verizon Business.
“The recent coronavirus pandemic has driven consumers away from the traditional use of cash to contactless methods of payment with payment cards as well as mobile devices. This has generated more electronic payment data and consumers trust businesses to safeguard their information.
“Payment security has to be seen as an on-going business priority by all companies that handle any payment data, they have a fundamental responsibility to their customers, suppliers and consumers.”
Few organizations successfully test security systems
Additional findings shine a spotlight on security testing and unmonitored system access: only 51.9 percent of organizations successfully test security systems and processes, and only approximately two-thirds of all businesses adequately track and monitor access to business-critical systems.
In addition, only 70.6 percent of financial institutions maintain essential perimeter security controls.
“This report is a welcome wake-up call to organizations that strong leadership is required to address failures to adequately manage payment security. The Verizon Business report aligns well with Omdia’s view that the alignment of security strategy with organizational strategy is essential for organizations to maintain compliance, in this case with PCI DSS 3.2.1 to provide appropriate levels of payment security.
“It makes clear that long-term data security and compliance combines the responsibilities of a number of roles, including the Chief Information Security Officer, the Chief Risk Officer, and Chief Compliance Officer, which Omdia concurs with,” comments Maxine Holt, senior research director at Omdia.
Difficulty to maintain PCI DSS compliance impacts all businesses
SMBs were flagged as having their own unique struggles with securing payment data. While smaller businesses generally have less card data to process and store than larger businesses, they have fewer resources and smaller budgets for security, impacting the resources available to maintain compliance with PCI DSS.
Often the measures needed to protect sensitive payment card data are perceived as too time-consuming and costly by these smaller organizations, but as the likelihood of a data breach for SMBs remains high it is imperative that PCI DSS compliance is maintained.
The on-going CISO challenge: Security strategy and compliance
The report also explores the challenges CISOs face in designing, implementing and maintaining an effective and sustainable security strategy, and how these can ultimately contribute to the breakdown of compliance and data security management.
These problems were not found to be technological in nature, but rather the result of organizational weaknesses that could be resolved by more mature management skills: creating formalized processes, building a business model for security, and defining a sound security strategy with operating models and frameworks.