For better or for worse, the global COVID-19 pandemic has confined most of us to our own countries (our houses and apartments, even), has changed how and from where we do our work, and has restricted our social lives.
The distractions and tools still available to help us battle our growing anxiety and sadness are few, but some of them, such as learning new things, are very powerful. Happily for all of us, many courses and trainings that were previously available only on-site are now virtual, opening new prospects and opportunities.
Among these new offerings is HITBSecTrain, an initiative launched by the organizers of the Hack in the Box (HITB) security conference, which has been offering deep-knowledge technical trainings in numerous cities (including Kuala Lumpur, Singapore, Amsterdam, Dubai, Bahrain, and Beijing) since 2003.
Known for featuring specialized security courses, HITB has worked with nearly 100 trainers across the years to offer cool, atypical trainings for security folks looking to hone their skills.
Now, in response to constant feedback from trainees asking that HITB offer more specialized topics and more subject matter experts, more often throughout the year, they’ve set up HITBSecTrain, which will offer HITB trainings on a monthly basis instead of only during HITB conference events.
In October, the courses on offer taught attendees about big data analytics, malware reverse-engineering and threat hunting, bug hunting and cloud security.
In November, to coincide with the virtual edition of HITBCyberWeek 2020, 10 deep-knowledge technical trainings are being offered, covering topics such as: 5G security awareness, practical malware analysis and memory forensics, mobile hacking, secure coding and DevSecOps, applied data science and machine learning for cybersecurity, and more.
For now, while courses run virtually, classes are delivered via livestream, use virtual lab environments, and are structured through a learning management system. All trainees will receive digital certificates corresponding to their course choice, with additional badges awarded for completing practical tests and quizzes.
With the new virtual format, HITB trainers are incorporating more interactive quizzes, collective exercises and practical assessments into their courses that will help trainees engage better with the content and with each other. This will also help trainers better gauge whether trainees have effectively gained the skills they sought from their course.
Email attacks have moved past standard phishing and become more targeted over the years. In this article, I will focus on email impersonation attacks, outline why they are dangerous, and provide some tips to help individuals and organizations reduce their risk exposure to impersonation attacks.
What are email impersonation attacks?
Email impersonation attacks are malicious emails where scammers pretend to be a trusted entity to steal money and sensitive information from victims. The trusted entity being impersonated could be anyone – your boss, your colleague, a vendor, or a consumer brand you get automated emails from.
Email impersonation attacks are tough to catch and worryingly effective because we tend to take quick action on emails from known entities. Scammers use impersonation in concert with other techniques to defraud organizations and steal account credentials, sometimes without victims realizing for days that they have been defrauded.
Fortunately, we can all follow some security hygiene best practices to reduce the risk of email impersonation attacks.
Tip #1 – Look out for social engineering cues
Email impersonation attacks are often crafted with language that induces a sense of urgency or fear in victims, coercing them into taking the action the email wants them to take. Not every email that makes us feel these emotions will be an impersonation attack, of course, but it’s an important factor to keep an eye out for, nonetheless.
Here are some common phrases and situations you should look out for in impersonation emails:
- Short deadlines given at short notice for processes involving the transfer of money or sensitive information.
- Unusual purchase requests (e.g., iTunes gift cards).
- Employees requesting sudden changes to direct deposit information.
- Vendors sharing new bank account details, especially right before an invoice is due.
This email impersonation attack exploits the COVID-19 pandemic to make an urgent request for gift card purchases.
Tip #2 – Always do a context check on emails
Targeted email attacks bank on victims being too busy and “doing before thinking” instead of stopping and engaging with the email rationally. While it may take a few extra seconds, always ask yourself if the email you’re reading – and what the email is asking for – make sense.
- Why would your CEO really ask you to purchase iTunes gift cards at two hours’ notice? Have they done it before?
- Why would Netflix emails come to your business email address?
- Why would the IRS ask for your SSN and other sensitive personal information over email?
To sum up this tip, I’d say: be a little paranoid while reading emails, even if they’re from trusted entities.
Tip #3 – Check for email address and sender name deviations
To stop email impersonation, many organizations have deployed keyword-based protection that catches emails where the email addresses or sender names match those of key executives (or other related keywords). To get past these security controls, impersonation attacks use email addresses and sender names with slight deviations from those of the entity the attacks are impersonating. Some common deviations to look out for are:
- Changes to the spelling, especially ones that are missed at first glance (e.g., “ei” instead of “ie” in a name).
- Changes based on visual similarities to trick victims (e.g., replacing “rn” with “m” because they look alike).
- Business emails sent from personal accounts like Gmail or Yahoo without advance notice. It’s advisable to validate the identity of the sender through secondary channels (text, Slack, or phone call) if they’re emailing you with requests from their personal account for the first time.
- Descriptive changes to the name, even if the changes fit in context. For example, attackers impersonating a Chief Technology Officer named Ryan Fraser may send emails with the sender name as “Ryan Fraser, Chief Technology Officer”.
- Changes to the components of the sender name (e.g., adding or removing a middle initial, abbreviating Mary Jane to MJ).
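The deviation checks above can be approximated in code. The sketch below – with hypothetical contact addresses and a hypothetical similarity threshold, not a production mail filter – flags sender addresses that are suspiciously close to, but not identical to, a known contact:

```python
from difflib import SequenceMatcher

# Known-good sender addresses (hypothetical examples)
KNOWN_CONTACTS = {"ryan.fraser@example.com", "mary.jane@example.com"}

def normalize(address: str) -> str:
    # Map common visual look-alikes to one canonical form,
    # e.g. "rn" renders much like "m" in many fonts
    return address.lower().replace("rn", "m").replace("0", "o").replace("1", "l")

def is_suspicious(sender: str, threshold: float = 0.9) -> bool:
    """Flag senders that nearly match a trusted address but are not one."""
    if sender in KNOWN_CONTACTS:
        return False  # exact match: trusted
    norm = normalize(sender)
    for contact in KNOWN_CONTACTS:
        similarity = SequenceMatcher(None, norm, normalize(contact)).ratio()
        if similarity >= threshold:
            return True  # close, but not exact: likely impersonation
    return False

print(is_suspicious("ryan.fraser@example.com"))   # exact match: not flagged
print(is_suspicious("ryan.frasier@example.com"))  # near match: flagged
print(is_suspicious("rnary.jane@example.com"))    # "rn" for "m": flagged
```

Real anti-impersonation products use far richer signals (display names, reply-to mismatches, sending infrastructure), but the core idea is the same: exact matches are fine, near matches are the danger zone.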
Tip #4 – Learn the “greatest hits” of impersonation phrases
Email impersonation has been around for long enough that there are well-known phrases and tactics we need to be aware of. The emails don’t always have to be directly related to money or data – the first email is sometimes a simple request, just to see who bites and buys into the email’s faux legitimacy. Be aware of the following phrases/context:
- “Are you free now?”, “Are you at your desk?” and related questions are frequent opening lines in impersonation emails. Because they seem like harmless emails with simple requests, they get past email security controls and lay the bait.
- “I need an urgent favor”, “Can you do something for me within the next 15 minutes?”, and other phrases implying the email is of a time-sensitive nature. If you get this email from your “CEO”, your instinct might be to respond quickly and be duped by the impersonation in the process.
- “Can you share your personal cell phone number?”, “I need your personal email”, and other out-of-context requests for personal information. The objective of these requests is to harvest information and build out a profile of the victim; once adversaries have enough information, they have another entity to impersonate.
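These opening lines can even be screened for mechanically. A toy filter – with a hypothetical phrase list sampled from the examples above, not a real security control – might score incoming mail against known impersonation openers:

```python
# Common impersonation opener phrases (sampled from the list above)
SUSPECT_PHRASES = [
    "are you free now",
    "are you at your desk",
    "i need an urgent favor",
    "can you share your personal cell phone number",
]

def impersonation_score(body: str) -> int:
    """Count how many known opener phrases appear in the message body."""
    text = body.lower()
    return sum(phrase in text for phrase in SUSPECT_PHRASES)

msg = "Hi, are you free now? I need an urgent favor before my meeting."
print(impersonation_score(msg))  # 2: two known phrases matched
```

A nonzero score doesn’t prove fraud, of course – it simply marks a message as worth the extra scrutiny the tips in this article describe.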
Tip #5 – Use secondary channels of authentication
Enterprise adoption of two-factor authentication (2FA) has grown considerably over the years, helping safeguard employee accounts and reduce the impact of account compromise.
Individuals should try to replicate this best practice for any email that makes unusual requests related to money or data. For example:
- Has a vendor emailed you with a sudden change in their bank account details, right when an invoice is due? Call or text the vendor and confirm that they sent the email.
- Did your manager email you asking for gift card purchases? Send them a Slack message (or whatever productivity app you use) to confirm the request.
- Did your HR representative email you a COVID resource document that needs email account credentials to be viewed? Check the veracity of the email with the HR rep.
Even if you’re reaching out to very busy people for this additional authentication, they will understand and appreciate your caution.
These tips are meant as starting points for individuals and organizations to better understand email impersonation and start addressing its risk factors. But effective protection against email impersonation can’t come down to eye tests alone. Enterprise security teams should conduct a thorough audit of their email security stack and explore augmenting native email security with tools that offer specific protection against impersonation.
With email more important to our digital lives than ever, it’s vital that we are able to believe people are who their email says they are. Email impersonation attacks exploit this sometimes-misplaced belief. Stopping email impersonation attacks will require a combination of security hygiene, email security solutions that provide specific impersonation protection, and some healthy paranoia while reading emails – even if they seem to be from people you trust.
Deepwatch Lens Score: SecOps maturity planning and benchmarking
Deepwatch Lens Score allows CISOs to quickly understand their data source collection, active analytics, and current Maturity Score, as well as how to improve it. The intuitive app delivers valuable data and insights to CISOs in minutes, in the palm of their hand.
Entrust launches direct-to-card solution for instant physical and mobile ID issuance
Sigma systems deliver a seamless user experience across the issuance process for desktop and mobile printing needs. They eliminate the frustrations of printer setup with a modular design and an out-of-the-box implementation that takes less than 30 minutes before users can begin issuing identities.
Splunk helps security teams modernize and unify their security operations in the cloud
Led by new, cloud-centric updates to Splunk Enterprise Security, Splunk Mission Control and the newly announced Splunk Mission Control Plug-In Framework, Splunk’s security operations suite enables Splunk customers to secure their cloud journey and solve their toughest cloud security challenges with data.
Incognia launches fraud detection solution for QR code contactless payments
Incognia’s fraud detection solution for QR code contactless payments uses location behavioral biometrics to verify buyers’ and sellers’ real-time and historical location behavior to protect against fake QR codes, account takeovers, and the use of synthetic identities during transactions.
2020 presented us with many surprises, but the world of data privacy somewhat bucked the trend. Many industry verticals suffered losses, uncertainty and closures, but the protection of individuals and their information continued to truck on.
After many websites simply blocked access unless you accepted their cookies (now deemed unlawful), we received clarity on cookies from the European Data Protection Board (EDPB). With the ending of Privacy Shield, we witnessed the cessation of a legal basis for cross border data transfers.
Severe fines levied for General Data Protection Regulation (GDPR) non-compliance showed organizations that the regulation is far from toothless and that data protection authorities are not easing up just because there is an ongoing global pandemic.
What can we expect in 2021? Undoubtedly, the number of data privacy cases brought before the courts will continue to rise. That’s not necessarily a bad thing: with each case comes additional clarity and precedent on many different areas of the regulation that, to date, are open to interpretation and conjecture.
Last time I spoke to the UK Information Commissioner’s Office regarding a technicality surrounding data subject access requests (DSARs) submitted by a representative, I was told that I was far from the only person enquiring about it, and this only illustrates some of the ambiguities faced by those responsible for implementing and maintaining compliance.
Of course, this is just the GDPR. There are many other data privacy legislative frameworks to consider. We fully expect 2021 to bring full and complete alignment of the ePrivacy Regulations with the GDPR, eradicating the conflict that exists today, particularly around consent, the soft opt-in, etc., where the GDPR is very clear but the current Privacy and Electronic Communications Regulations (PECR) are not quite so much.
These developments are just inside Europe; across the globe, we’re seeing continued development of data localization laws, which organizations are mandated to adhere to. In the US, the California Consumer Privacy Act (CCPA) has kickstarted a swathe of data privacy reforms within many states, with many calls for something similar at the federal level.
The following year(s) will see that build and, much like with the GDPR, precedent-setting cases are needed to provide more clarity regarding the rules. Will Americans look to replace the shattered Privacy Shield framework, or will they adopt Standard Contractual Clauses (SCCs) more widely? SCCs are a very strong legal basis, providing the clauses are updated to align with the GDPR (something else we’d expect to see in 2021), and I suspect the US will take this road as the realization of the importance of trade with the EU grows.
Other noteworthy movements in data protection laws are happening in Russia, with amendments to the Federal Law on Personal Data taking a closer look at TLS as a protective measure, and in the Philippines, where the Data Privacy Act of 2012 is being replaced by a new bill (currently a work in progress, but it’s coming).
One of the biggest events of 2021 will be the UK leaving the EU. The British implementation of the GDPR comes in the form of the UK Data Protection Act 2018. Aside from a few derogations, it’s the GDPR, and that’s great… as far as it goes. Having strong local data privacy laws is good, but after enjoying 47 years (at the time of writing) of free movement within the Union, how will being outside of the EU impact British business?
It is thought and hoped that the UK will be granted an adequacy decision fairly swiftly, given that historically local UK laws aligned with those inside the Union, but there is no guarantee. The uncertainty around how data transfers will look in future might result in the British industry using more SCCs. The currently low priority plans to make Binding Corporate Rules (BCR) easier and more affordable will come sharply to the fore as the demand for them goes up.
One thing is certain, it’s going to be a fascinating year for data privacy and we are excited to see clearer definitions, increased certification, precedent-setting case law and whatever else unfolds as we continue to navigate a journey of governance, compliance and security.
Security researcher Rafay Baloch has discovered address bar spoofing vulnerabilities in several mobile browsers, which could allow attackers to trick users into sharing sensitive information through legitimate-looking phishing sites.
“With ever growing sophistication of spear phishing attacks, exploitation of browser-based vulnerabilities such as address bar spoofing may exacerbate the success of spear phishing attacks and hence prove to be very lethal,” he noted.
“First and foremost, it is easy to persuade the victim into revealing credentials or to distribute malware when the address bar points to a trusted website and gives no indicators of forgery; secondly, since the vulnerability exploits a specific feature in a browser, it can evade several anti-phishing schemes and solutions.”
The address bar spoofing vulnerabilities and affected mobile browsers
Unlike desktop browsers, mobile browsers are not great at showing security indicators that might point to a site’s malicious nature. In fact, pretty much the only consistent indicator is the address bar (i.e. a suspicious-looking URL in it).
So if the attacker is able to spoof the URL and show the one the user expects – for example, apple.com for a phishing site that impersonates Apple – chances are good the user will enter their login credentials into it. The vulnerabilities discovered by Baloch permit exactly that, and affect the:
- UC Browser, Opera Mini, Yandex Browser and RITS Browser for Android
- Opera Touch, Bolt Browser and Safari for iOS
“By messing with the timing between page loads and when the browser gets a chance to refresh the address bar, an attacker can cause either a pop-up to appear to come from an arbitrary website or can render content in the browser window that falsely appears to come from an arbitrary website.”
Fixes for some, not for others
As 60+ days have passed since the vendors were apprised of the existence of the flaws, Baloch has released some details and several PoC exploits.
In the meantime:
- Apple and Yandex pushed out fixes
- Opera released security updates for Opera Touch and is expected to do the same for Opera Mini in early November
- Raise IT Solutions planned to release a fix for the RITS Browser this week, but hasn’t yet
- UCWeb (the creators of the UC Browser) haven’t responded to the report, and it’s doubtful whether the creator of the Bolt Browser knows about the vulnerabilities, as they haven’t been able to contact him (the disclosure notification bounced when sent to the listed support email)
Users should implement the offered updates (if they don’t have the “auto-update” option switched on). Those who use browsers that still don’t have fixes available might want to consider switching to a browser that’s more actively developed/patched.
But all should be extra careful when thinking about clicking on links received via text or email from unknown sources. These flaws have been remediated, but other similar ones will surely be discovered in the future – let’s just hope it’s by researchers, and not attackers.
The US Cybersecurity and Infrastructure Security Agency (CISA) has released a list of 25 vulnerabilities Chinese state-sponsored hackers have been recently scanning for or have exploited in attacks.
“Most of the vulnerabilities […] can be exploited to gain initial access to victim networks using products that are directly accessible from the Internet and act as gateways to internal networks. The majority of the products are either for remote access or for external web services, and should be prioritized for immediate patching,” the agency noted.
The list of vulnerabilities exploited by Chinese hackers
The vulnerability list they shared is likely not complete, as Chinese state-sponsored actors may use other known and unknown vulnerabilities. All network defenders – but especially those securing critical systems in organizations on which US national security and defense depend – should consider patching these flaws a priority.
Mitigations are also available
If patching is not possible, the risk of exploitation for most of these can be lowered by implementing mitigations provided by the vendors. CISA also advises implementing general mitigations like:
- Disabling external management capabilities and setting up an out-of-band management network
- Blocking obsolete or unused protocols at the network edge and disabling them in device configurations
- Isolating Internet-facing services in a network DMZ to reduce the exposure of the internal network
- Enabling robust logging of Internet-facing services and monitoring the logs for signs of compromise
The agency also noted that patching alone cannot solve the problem of data stolen or modified before a device was patched, and that password changes and reviews of accounts are good practice.
Additional “most exploited vulnerabilities” lists
Earlier this year, CISA released a list of old and new software vulnerabilities that are routinely exploited by foreign cyber actors and cyber criminals, the NSA and the Australian Signals Directorate released a list of web application vulnerabilities that are commonly exploited to install web shell malware, and Recorded Future published a list of ten software vulnerabilities most exploited by cybercriminals in 2019.
Admins and network defenders are encouraged to peruse them and patch those flaws as well.
Many companies tend to jump into the cloud before thinking about security. They may think they’ve thought about security, but when moving to the cloud, the whole concept of security changes. The security model must transform as well.
Moving to the cloud and staying secure
Most companies maintain a “castle, moat, and drawbridge” attitude to security. They put everything inside the “castle” (datacenter); establish a moat around it, with sharks and alligators, guns on turrets; and control access by raising the drawbridge. The access protocol involves a request for access, vetting through firewall rules where the access is granted or denied. That’s perimeter security.
When moving to the cloud, perimeter security is still important, but identity-based security is available to strengthen the security posture. That’s where a cloud partner skilled at explaining and operating a different security model is needed.
Anybody can grab a virtual machine, build the machine in the cloud, and be done, but establishing a VM and transforming the machine to a service with identity-based security is a different prospect. When identity is added to security, the model looks very different, resulting in cost savings and an increased security posture.
Advanced technology, cost of security, and lack of cybersecurity professionals place a strain on organizations. Cloud providers invest heavily in infrastructure, best-in-class tools, and a workforce uniquely focused on security. As a result, organizations win operationally, financially, and from a security perspective, when moving to the cloud. To be clear, moving applications and servers, as is, to the cloud does not make them secure.
Movement to the cloud should be a standardized process that uses a Cloud Center of Excellence (CCoE) or Cloud Business Office (CBO); when implemented within a process focused on security first, organizations can reap the security benefits.
Although security is marketed as a shared responsibility in the cloud, ultimately, the owner of the data (customer) is responsible and the responsibility is non-transferrable. In short, the customer must understand the responsibility matrix (RACI) involved to accomplish their end goals. Every cloud provider has a shared responsibility matrix, but organizations often misunderstand the responsibilities or the lines fall into a grey area. Regardless of responsibility models, the data owner has a responsibility to protect the information and systems. As a result, the enterprise must own an understanding of all stakeholders, their responsibilities, and their status.
When choosing a partner, it’s vital for companies to identify their exact needs, their weaknesses, and even their culture. No cloud vendor will cover it all from the beginning, so it’s essential that organizations take control and ask the right questions (see Cloud Security Alliance’s CAIQ), in order to place trust in any cloud provider. If it’s to be a managed service, for example, it’s crucial to ask detailed questions about how the cloud provider intends to execute the offering.
It’s important to develop a standard security questionnaire and probe multiple layers deep into the service model until the provider is unable to meet the need. Looking through a multilayer deep lens allows the customer and service provider to understand the exact lines of responsibility and the details around task accomplishment.
It might sound obvious, but it’s worth stressing: trust is a shared responsibility between the customer and cloud provider. Trust is also earned over time and is critical to the success of the customer-cloud provider relationship. That said, zero trust is a technical term that means, from a technology viewpoint, assume danger and breach. Organizations must trust their cloud provider but should avoid blind trust and validate. Trust as a Service (TaaS) is a newer acronym that refers to third-party endorsement of a provider’s security practices.
Key influencers of a customer’s trust in their cloud provider include:
- Data location
- Investigation status and location of data
- Data segregation (keeping cloud customers’ data separated from others)
- Privileged access
- Backup and recovery
- Regulatory compliance
- Long-term viability
A TaaS example: Google Cloud
Google has taken great strides to earn customer trust, designing the Google Cloud Platform with a keen eye on zero trust via BeyondCorp, its implementation of the model. For example, Google has implemented two core concepts:
- Delivery of services and data: ensuring that people with the correct identity and the right purpose can access the required data every time
- Prioritization and focus: access and innovation are placed ahead of threats and risks, meaning that as products are innovated, security is built into the environment
Transparency is very important to the trust relationship. Google has enabled transparency through strong visibility and control of data. When evaluating cloud providers, understanding their transparency related to access and service status is crucial. Google ensures transparency by using specific controls including:
- Limited data center access from a physical standpoint, adhering to strict access controls
- Disclosing how and why customer data is accessed
- Incorporating a process of access approvals
Multi-layered security for a trusted infrastructure
Finally, cloud services must provide customers with an understanding of how each layer of infrastructure works, with rules built into each. This includes operational and device security, encryption of data at rest, multiple layers of identity, and storage services – all multi-layered and supported by security by default.
Cloud native companies have a security-first approach and naturally have a higher security understanding and posture. That said, when choosing a cloud provider, enterprises should always understand, identify, and ensure that their cloud solution addresses each one of their security needs, and who’s responsible for what.
Essentially, every business must find a cloud partner that can answer all the key questions, provide transparency, and establish a trusted relationship in the zero trust world where we operate.
Zerologon received a perfect CVSS score of 10. Threats rating a perfect 10 are easy to execute and have deep-reaching impact. Fortunately, they aren’t frequent, especially in prominent software brands such as Windows. Still, organizations that perpetually lag when it comes to patching become prime targets for cybercriminals. Flaws like Zerologon are rare, but there’s no reason to assume the next attack won’t use another perfect 10 CVSS vulnerability – this time, a zero-day.
Zerologon: Unexpected squall
Zerologon escalates a domain user beyond their current role and permissions to a Windows Domain Administrator. This vulnerability is trivially easy to exploit. While it seems that the most obvious threat is a disgruntled insider, attackers may target any average user. The most significant risk comes from a user with an already compromised system.
In this scenario, a bad actor has already taken over an end user’s system but is constrained only to their current level of access. By executing this exploit, the bad actor can break out of their existing permissions box. This attack grants them the proverbial keys to the kingdom in a Windows domain to access whatever Windows-based devices they wish.
Part of why Zerologon is problematic is that many organizations rely on Windows as an authoritative identity for a domain. To save time, they promote their Windows Domain Administrators to an Administrator role throughout the organizational IT ecosystem and assign bulk permissions, rather than adding them individually. This method eases administration by removing the need to update the access permissions frequently as these users change jobs. This practice violates the principle of least privilege, leaving an opening for anyone with a Windows Domain Administrator role to exercise broad-reaching access rights beyond what they require to fulfill the role.
Beware of sharks
Advanced preparation for attacks like these requires a fundamental paradigm shift in organizational boundary definitions away from a legacy mentality to a more modern cybersecurity mindset. The traditional castle model assumes all threats remain outside the firewall boundary and trusts everything either natively internal or connected via VPN to some degree.
Modern cybersecurity professionals understand the advantage of controls like zero standing privilege (ZSP), which authorizes no one by default and requires each user to request access and be evaluated before privileged access is granted. Think of it much like the security check at an airport. To get in, everyone – passenger, pilot, even store staff – needs to be inspected, prove they belong, and have nothing questionable in their possession.
This continual re-certification prevents users from gaining access once they’ve experienced an event that alters their eligibility, such as leaving the organization or changing positions. Checking permissions before approving them ensures only those who currently require a resource can access it.
My hero zero (standing privilege)
Implementing the design concept of zero standing privilege is crucial to hardening against privilege escalation attacks, as it removes the administrator’s vast amounts of standing power and access. Users acquire these rights for a limited period and only on an as-needed basis. This Just-In-Time (JIT) method of provisioning creates a better access review process. Requests are either granted time-bound access or flagged for escalation to a human approver, ensuring automation oversight.
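The time-bound, Just-In-Time grant idea can be sketched in a few lines. The snippet below is a minimal illustration – an in-memory store with hypothetical user and resource names, not a real PAM product – showing that access is scoped to one resource and expires on its own:

```python
import time

# user -> (resource, expiry timestamp); hypothetical in-memory grant store
grants: dict[str, tuple[str, float]] = {}

def request_access(user: str, resource: str, ttl_seconds: int = 900) -> None:
    """Grant access for a limited window only (Just-In-Time provisioning)."""
    grants[user] = (resource, time.time() + ttl_seconds)

def has_access(user: str, resource: str) -> bool:
    """Re-check on every use: no standing privilege survives its window."""
    grant = grants.get(user)
    if grant is None:
        return False
    granted_resource, expiry = grant
    return granted_resource == resource and time.time() < expiry

request_access("alice", "prod-db", ttl_seconds=900)
print(has_access("alice", "prod-db"))   # True while the window is open
print(has_access("alice", "billing"))   # False: the grant is resource-scoped
print(has_access("bob", "prod-db"))     # False: no grant at all
```

In a production deployment, the grant request would route through the approval and logging machinery described above rather than being granted unconditionally.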
An essential component of zero standing privilege is avoiding super-user roles and access. Old school practitioners may find it odd and question the impact on daily administrative tasks that keep the ecosystem running. Users manage these tasks through heavily logged time-limited permission assignments. Reliable user behavior analytics, combined with risk-based privileged access management (PAM) and machine learning supported log analysis, offers organizations better contextual identity information. Understanding how their privileged access is leveraged and identifying access misuse before it takes root is vital to preventing a breach.
Peering into the depths
To even start with zero standing privilege, an organization must understand what assets they consider privileged. The categorization of digital assets begins the process. The next step is assigning ownership of these resources. Doing this allows organizations to configure the PAM software to accommodate the policies and access rules defined organizationally, ensuring access rules meet governance and compliance requirements.
The PAM solution requires in-depth visibility of each individual’s full access across all cloud and SaaS environments, as well as throughout the internal IT infrastructure. This information improves the identification of toxic combinations, where granted permissions create compliance issues such as segregation of duties (SoD) violations.
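As a simplified illustration of the toxic-combination check, the sketch below (with hypothetical permission names and a hypothetical conflict policy) flags segregation-of-duties violations in a user’s combined grants:

```python
# Permission pairs that violate segregation of duties if held together
# (hypothetical policy, for illustration only)
SOD_CONFLICTS = {
    frozenset({"create_vendor", "approve_payment"}),
    frozenset({"deploy_code", "approve_release"}),
}

def find_sod_violations(user_permissions: set[str]) -> list[frozenset]:
    """Return every conflicting pair fully contained in the user's grants."""
    return [pair for pair in SOD_CONFLICTS if pair <= user_permissions]

perms = {"create_vendor", "approve_payment", "read_reports"}
print(find_sod_violations(perms))  # the vendor/payment pair is a violation
```

A real PAM deployment evaluates permissions aggregated across every cloud, SaaS, and on-premises system, which is exactly why the in-depth visibility described above matters.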
AI & UEBA to the rescue
Zero standing privilege generates a large number of user logs and behavioral information over time. Manual log review becomes unsustainable very quickly. Leveraging the power of AI and machine learning to derive intelligent analytics allows organizations to identify risky behaviors and locate potential breaches far faster than human users.
Integration of a user and entity behavior analytics (UEBA) software establishes baselines of behavior, triggering alerts when deviations occur. UEBA systems detect insider threats and advanced persistent threats (APTs) while generating contextual identity information.
UEBA systems track all behavior linked back to an entity and identify anomalous behaviors such as spikes in access requests, requests for data that would typically not be allowed for that user’s roles, or systematic access to large numbers of items. This contextual information helps organizations identify situations that might indicate a breach or point to unauthorized exfiltration of data.
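As a toy illustration of the baselining idea (not how any particular UEBA product works internally), a spike in a user's daily access-request count can be flagged with a simple z-score against that user's history:

```python
import statistics

# Illustrative sketch only: flag a spike in daily access-request counts by
# comparing today's count against the user's historical baseline.
def is_anomalous(history, today, threshold=3.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return (today - mean) / stdev > threshold

baseline = [4, 5, 6, 5, 4, 6, 5, 5]  # typical requests/day for this user
print(is_anomalous(baseline, 5))     # ordinary day -> False
print(is_anomalous(baseline, 40))    # sudden spike -> True
```

Real UEBA systems model far richer features (time of day, peer group, resource sensitivity), but the principle of deviation-from-baseline is the same.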
Your compass points to ZTA
Protecting against privilege escalation threats requires more than merely staying up to date on patches. Part of stopping attacks like Zerologon is to re-imagine how security is architected in an organization. Centering identity as the new security perimeter and implementing zero standing privilege are essential to the foundation of a security model known as zero trust architecture (ZTA).
Zero trust architecture has existed for a while in the corporate world. It has been gaining attention in the public sector since NIST’s recent publication of SP 800-207, which outlines ZTA and how to leverage it for government agencies. NIST’s sanctification of ZTA opened the doors for government entities and civilian contractors to incorporate it into their security models. Taking this route helps close the privilege escalation pathway, providing your organization a safe harbor in the event of another cybersecurity perfect storm.
It’s time to change the way we think about cybersecurity and risk management. Cybersecurity is no longer an IT problem to solve or a “necessary evil” to cost manage. Rather, cybersecurity has rapidly stormed the boardroom as a result of high-profile and costly data breaches.
Get the following insights from this webinar:
- Recent events have changed our focus from protecting the perimeter
- Risk management is a formula based on the cost of an undesirable outcome times the likelihood of its occurrence
- Embracing cybersecurity as a factor in corporate risk management means firms can adapt quickly
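The risk formula from the second point can be made concrete with a one-line expected-loss calculation; the cost and likelihood figures below are hypothetical:

```python
# risk = cost of the undesirable outcome x likelihood of its occurrence
def expected_loss(cost, annual_likelihood):
    return cost * annual_likelihood

# A hypothetical breach costing $4M with a 5% annual likelihood
print(expected_loss(4_000_000, 0.05))  # 200000.0
```

Framing cybersecurity spend against this annualized figure is what lets it be weighed like any other corporate risk.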
The Sandworm Team hacking group is part of Unit 74455 of the Russian Main Intelligence Directorate (GRU), the US Department of Justice (DoJ) claimed on Monday as it unsealed an indictment against six alleged members of the unit.
Sandworm Team attacks
“These GRU hackers and their co-conspirators engaged in computer intrusions and attacks intended to support Russian government efforts to undermine, retaliate against, or otherwise destabilize: Ukraine; Georgia; elections in France; efforts to hold Russia accountable for its use of a weapons-grade nerve agent, Novichok, on foreign soil; and the 2018 PyeongChang Winter Olympic Games after Russian athletes were banned from participating under their nation’s flag, as a consequence of Russian government-sponsored doping effort,” the DoJ alleges.
“Their computer attacks used some of the world’s most destructive malware to date, including: KillDisk and Industroyer, which each caused blackouts in Ukraine; NotPetya, which caused nearly $1 billion in losses to the three victims identified in the indictment alone; and Olympic Destroyer, which disrupted thousands of computers used to support the 2018 PyeongChang Winter Olympics.”
At the same time, the UK National Cyber Security Centre (NCSC) says that it assesses “with high confidence” that the group had been actively targeting organizations involved in the 2020 Olympic and Paralympic Games before they were postponed.
“In the attacks on the 2018 Games, the GRU’s cyber unit attempted to disguise itself as North Korean and Chinese hackers when it targeted the opening ceremony. It went on to target broadcasters, a ski resort, Olympic officials and sponsors of the games. The GRU deployed data-deletion malware against the Winter Games IT systems and targeted devices across the Republic of Korea using VPNFilter,” the UK NCSC said.
“The NCSC assesses that the incident was intended to sabotage the running of the Winter Olympic and Paralympic Games, as the malware was designed to wipe data from and disable computers and networks. Administrators worked to isolate the malware and replace the affected computers, preventing potential disruption.”
The UK government confirmed their prior assessments that many of the aforementioned attacks had been the work of the Russian GRU.
Sandworm Team hackers
Sandworm Team (aka “Telebots,” “Voodoo Bear,” “Iron Viking,” and “BlackEnergy”) is the group behind many high-profile attacks over the past half-decade, the DoJ claims, all allegedly performed under the aegis of the Russian government.
The six alleged Sandworm Team hackers against which the indictments have been brought were responsible for a variety of tasks:
One of them, Anatoliy Kovalev, has been previously charged by a US court “with conspiring to gain unauthorized access into the computers of US persons and entities involved in the administration of the 2016 US elections,” the DoJ noted.
The US investigation into the group has lasted for several years, and had help from Ukrainian authorities, the Governments of the Republic of Korea and New Zealand, Georgian authorities, and the United Kingdom’s intelligence services, victims, and several IT and IT security companies.
Political and other ramifications
Warrants for the arrest of the six alleged Sandworm Team members have been issued, but the chances that arrests will actually be made, now or in the future, are slim to nonexistent.
The Russian government’s official position is that the accusations are unfounded and part of an “information war against Russia”.
Russia starts responding to “accusations” of hacking operations. Chairman of the State Parliament committee on international affairs Dmitry Novikov says this is part of “information war against Russia”. https://t.co/ifSuCM23VN
— Lukasz Olejnik (@lukOlejnik) October 20, 2020
It’s unusual to see the US mount criminal charges against intelligence officers engaged in cyber-espionage operations outside the US. The rationale here is that many of the attacks had real-world consequences: they were aimed at undermining the target countries’ governments and destabilizing the countries themselves, and they affected individuals, civilian critical infrastructure (including organizations in the US), and private sector companies.
“The crimes committed by Russian government officials were against real victims who suffered real harm. We have an obligation to hold accountable those who commit crimes – no matter where they reside and no matter for whom they work – in order to seek justice on behalf of these victims,” commented US Attorney Scott W. Brady for the Western District of Pennsylvania.
There are currently no laws or norms regulating cyber attacks and cyber espionage in peacetime, but earlier this year Russian Federation President Vladimir Putin called for an agreement between Russia and the US that would guarantee the two nations would not try to meddle in each other’s elections and internal affairs via “cyber” means.
This latest round of indictments by the US is unlikely to act as a deterrent but, as Dr. Panayotis Yannakogeorgos recently told Help Net Security, indictments and public attribution of attacks serve several other purposes.
Another interesting result of this indictment may be felt by insurance companies and their customers that have suffered disruption due to cyber attacks mounted by nation-states. Some of their insurance policies may not cover cyber incidents that could be considered an “act of war” (e.g., the NotPetya attacks).
We are beginning to shift away from what has long been our first and last line of defense: the password. It’s an exciting time. Since the beginning, passwords have aggravated people. Meanwhile, passwords have become the de facto first step in most attacks. Yet I can’t help but think, what will the consequences of our actions be?
Intended and unintended consequences
Back when overhead cameras came to the express toll routes in Ontario, Canada, it wasn’t long before the SQL injection to drop tables made its way onto bumper stickers. More recently in California, researcher Joe Tartaro purchased a license plate that said NULL. With the bumper stickers, the story goes, everyone sharing the road would get a few hours of toll-free driving. But with the NULL license plate? Tartaro ended up on the hook for every traffic ticket with no plate specified, to the tune of thousands of dollars.
One organization I advised recently completed an initiative to reduce the number of agents on the endpoint. In a year when many are extending the lifespan and performance of endpoints while eliminating location-dependent security controls, this shift makes strategic sense.
Another CISO I spoke with recently consolidated multi-factor authenticators onto a single platform. Standardizing the user experience and reducing costs is always a pragmatic move. Yet both of these decisions constrained future ones: any subsequent initiative by the security team that changed authenticators or added agents ended up stuck in park, waiting for a green light.
Be careful not to limit future moves
To make moves that open up possibilities, security teams should think along two lines: usability and defensibility. That is, how will the change impact the workforce, near term and long term? And from the opposite angle, how will the change affect criminal behavior, near term and long term?
Whether decreasing the number of passwords required through single sign-on (SSO) or eliminating the password altogether in favor of a strong authentication factor (passwordless), the priority is on the workforce experience. The number one reason for tackling the password problem given by security leaders is improving the user experience. It is a rare security control that makes people’s lives easier and leadership wants to take full advantage.
There are two considerations when planning for usability. The first is ensuring the tactic addresses the common friction points. For example, with passwordless, does the approach provide access to the devices and applications people work with? Is it more convenient and faster than what they do today? The second consideration is evaluating what the tactic allows the security team to do next. Does the approach to passwordless or SSO block a future initiative due to lock-in? Or will the change enable us to take further steps to secure authentication?
The one thing we know for certain is, whatever steps we take, criminals will take steps to get around us. In the sixty years since the first password leak, we’ve done everything we can, using both machine and man. We’ve encrypted passwords. We’ve hashed them. We increased key length and algorithm strength. At the same time, we’ve asked users to create longer passwords, more complex passwords, unique passwords. We’ve provided security awareness training. None of these steps were taken in a vacuum. Criminals cracked files, created rainbow tables, brute-forced and phished credentials. Sixty years of experience suggests the advancement we make will be met with an advanced attack.
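One concrete step in that arms race, salted password hashing with a memory-hard function, can be sketched with just the standard library. The parameters below are illustrative, not a tuning recommendation:

```python
import hashlib
import hmac
import os

# Salted, memory-hard password hashing (scrypt, available in hashlib since
# Python 3.6). The unique salt defeats precomputed rainbow tables; the
# memory-hard function slows brute-force cracking.
def hash_password(password: str, salt: bytes = None):
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=2**26)
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("Tr0ub4dor&3", salt, digest))                   # False
```

Each such defensive step raised the attacker's cost, and each was eventually answered, which is precisely the pattern the paragraph above describes.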
We must increase the trust in authentication while increasing usability, and we must take steps that open up future options. Security teams can increase trust by pairing user authentication with device authentication. Now the adversary must both compromise the authentication and gain access to the device.
To reduce the likelihood of device compromise, set policies to prevent unpatched, insecure, infected, or compromised devices from authenticating. The likelihood can be even further reduced by capturing telemetry, modeling activity, and comparing activity to the user’s baseline. Now the adversary must compromise authentication, gain access to the endpoint device, avoid endpoint detection, and avoid behavior analytics.
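A minimal sketch of such a policy gate follows, assuming hypothetical device-posture attributes (the field names are invented for illustration, not drawn from any real product):

```python
# Hypothetical posture check mirroring the policy described above:
# unpatched, insecure, or compromised devices are refused authentication.
def device_may_authenticate(device):
    return (device.get("patched", False)
            and device.get("disk_encrypted", False)
            and not device.get("known_compromised", True))

laptop = {"patched": True, "disk_encrypted": True, "known_compromised": False}
stale  = {"patched": False, "disk_encrypted": True, "known_compromised": False}
print(device_may_authenticate(laptop))  # True
print(device_may_authenticate(stale))   # False
```

Note the defaults deny by design: a device that cannot attest to its posture is treated as untrusted.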
Technology is full of unintended consequences. Some lead to toll-free drives and others lead to unexpected fees. Some open new opportunities, others new vulnerabilities. Today, many are moving to improve user experience by reducing or removing passwords. The consequences won’t be known immediately. We must ensure our approach meets the use cases the workforce cares about while positioning us to address longer-term goals and challenges.
Additionally, we must get ahead of adversaries and criminals. With device trust and behavior analytics, we must increase trust in passwordless authentication. We can’t predict what is to come, but these are steps security teams can take today to better position and protect our organizations.
What is confidential computing? Can it strengthen enterprise security? Sam Lugani, Lead Security PMM, Google Workspace & GCP, answers these and other questions in this Help Net Security interview.
How does confidential computing enhance the overall security of a complex enterprise architecture?
We’ve all heard about encryption in-transit and at-rest, but as organizations prepare to move their workloads to the cloud, one of the biggest challenges they face is how to process sensitive data while still keeping it private. However, when data is being processed, there hasn’t been an easy solution to keep it encrypted.
Confidential computing is a breakthrough technology which encrypts data in-use – while it is being processed. It creates a future where private and encrypted services become the cloud standard.
At Google Cloud, we believe this transformational technology will help instill confidence that customer data is not being exposed to cloud providers or susceptible to insider risks.
Confidential computing has moved from research projects into worldwide deployed solutions. What are the prerequisites for delivering confidential computing across both on-prem and cloud environments?
Running workloads confidentially will differ based on what services and tools you use, but one thing is a given: organizations don’t want to trade usability and performance for security.
Those running Google Cloud can seamlessly take advantage of the products in our portfolio, Confidential VMs and Confidential GKE Nodes.
All customer workloads that run in VMs or containers today can run confidentially without significant performance impact. The best part is that we have worked hard to simplify the complexity. One checkbox—it’s that simple.
What type of investments does confidential computing require? What technologies and techniques are involved?
To deliver on the promise of confidential computing, customers need to take advantage of security technology offered by modern, high-performance CPUs, which is why Google Cloud’s Confidential VMs run on N2D series VMs powered by 2nd Gen AMD EPYC processors.
To support these environments, we also had to update our own hypervisor and low-level platform stack while also working closely with the open source Linux community and modern operating system distributors to ensure that they can support the technology.
Networking and storage drivers are also critical to the deployment of secure workloads and we had to ensure we were capable of handling confidential computing traffic.
How is confidential computing helping large organizations with a massive work-from-home movement?
As we entered the first few months of dealing with COVID-19, many organizations expected a slowdown in their digital strategy. Instead, we saw the opposite – most customers accelerated their use of cloud-based services. Today, enterprises have to manage a new normal which includes a distributed workforce and new digital strategies.
With workforces dispersed, confidential computing can help organizations collaborate on sensitive workloads in the cloud across geographies and competitors, all while preserving the privacy of confidential datasets. This can lead to the development of transformative technologies – imagine, for example, being able to more quickly build vaccines and cure diseases as a result of this secure collaboration.
How do you see the work of the Confidential Computing Consortium evolving in the near future?
Cloud providers, hardware manufacturers, and software vendors all need to work together to define standards to advance confidential computing. As the technology garners more interest, sustained industry collaboration such as the Consortium will be key to helping realize the true potential of confidential computing.
SOCs across the globe are most concerned with advanced threat detection and are increasingly looking to next-gen automation tools like AI and ML technologies to proactively safeguard the enterprise, Micro Focus reveals.
Growing deployment of next-gen tools and capabilities
The report’s findings show that over 93 percent of respondents employ AI and ML technologies with the leading goal of improving advanced threat detection capabilities, and that over 92 percent of respondents expect to use or acquire some form of automation tool within the next 12 months.
These findings indicate that as SOCs continue to mature, they will deploy next-gen tools and capabilities at an unprecedented rate to address gaps in security.
“The odds are stacked against today’s SOCs: more data, more sophisticated attacks, and larger surface areas to monitor. However, when properly implemented, AI technologies such as unsupervised machine learning, are helping to fuel next-generation security operations, as evidenced by this year’s report,” said Stephan Jou, CTO Interset at Micro Focus.
“We’re observing more and more enterprises discovering that AI and ML can be remarkably effective and augment advanced threat detection and response capabilities, thereby accelerating the ability of SecOps teams to better protect the enterprise.”
Organizations relying on the MITRE ATT&CK framework
As the volume of threats rises, the report finds that 90 percent of organizations are relying on the MITRE ATT&CK framework as a tool for understanding attack techniques, and that the most common reason for relying on this knowledge base of adversary tactics is detecting advanced threats.
Further, the scale of technology needed to secure today’s digital assets means SOC teams are relying more heavily on tools to effectively do their jobs.
With so many responsibilities, the report found that SecOps teams are using numerous tools to help secure critical information, with organizations widely using 11 common types of security operations tools and with each tool expected to exceed 80% adoption in 2021.
- COVID-19: During the pandemic, security operations teams have faced many challenges. The biggest has been the increased volume of cyberthreats and security incidents (45 percent globally), followed by higher risks due to workforce usage of unmanaged devices (40 percent globally).
- Most severe SOC challenges: Approximately 1 in 3 respondents cite the two most severe challenges for the SOC team as prioritizing security incidents and monitoring security across a growing attack surface.
- Cloud journeys: Over 96 percent of organizations use the cloud for IT security operations, and on average nearly two-thirds of their IT security operations software and services are already deployed in the cloud.
Microsoft and Adobe released out-of-band security updates for Visual Studio Code, the Windows Codecs Library, and Magento.
All the updates fix vulnerabilities that could be exploited for remote code execution, but the good news is that none of them are being actively exploited by attackers (yet!).
“To exploit this vulnerability, an attacker would need to convince a target to clone a repository and open it in Visual Studio Code. Attacker-specified code would execute when the target opens the malicious ‘package.json’ file,” Microsoft explained.
If the target uses an account with administrative privileges, the attacker can take complete control of the affected system.
Microsoft has also fixed an RCE flaw (CVE-2020-17022) in the way the Microsoft Windows Codecs Library handles objects in memory, which could be triggered by a program processing a specially crafted image file.
It only affects Windows 10 users, and only if they installed the optional HEVC or “HEVC from Device Manufacturer” media codecs from Microsoft Store.
“Affected customers will be automatically updated by Microsoft Store. Customers do not need to take any action to receive the update,” the company noted, and explained that “servicing for store apps/components does not follow the monthly ‘Update Tuesday’ cadence, but are offered whenever necessary.”
Among the plugged security holes are two critical ones:
- CVE-2020-24407 – a file upload allow list bypass that could be exploited to achieve code execution
- CVE-2020-24400 – an SQL injection that could allow arbitrary read or write access to the database
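As a generic illustration of the second class of bug (not Magento's actual code), here is how string-built SQL lets attacker input rewrite a query, while a parameterized query treats the same input as inert data:

```python
import sqlite3

# Generic SQL injection demo using an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable pattern: concatenation lets the input become part of the query,
# so the OR clause matches every row.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safe pattern: the placeholder binds the input as pure data, nothing matches.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(leaked)  # [('s3cret',)]
print(safe)    # []
```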
Vulnerability scanners can be a very useful addition to any development or operations process. Since a typical vulnerability scanner needs to detect vulnerabilities in deployed software, they are (generally) not dependent on the language or technology used for the application they are scanning.
This generally doesn’t make them the top choice for detecting every class of vulnerability, or for catching subtle bugs and business logic issues, but it does make them great and very common tools for testing a large number of diverse applications, where such dynamic application security testing (DAST) tools are indispensable. This includes testing for security defects in software currently under development as part of an SDLC process, reviewing third-party applications deployed inside one’s network (as part of a due diligence process) or – most commonly – finding issues in all kinds of internally developed applications.
We reviewed Netsparker Enterprise, which is one of the industry’s top choices for web application vulnerability scanning.
Netsparker Enterprise is primarily a cloud-based solution, which means it will focus on applications that are publicly available on the open internet, but it can also scan in-perimeter or isolated applications with the help of an agent, which is usually deployed in a pre-packaged Docker container or a Windows or Linux binary.
To test this product, we wanted to know how Netsparker handles a few things:
1. Scanning workflow
2. Scan customization options
3. Detection accuracy and results
4. CI/CD and issue tracking integrations
5. API and integration capabilities
6. Reporting and remediation efforts.
To assess the tool’s detection capabilities, we needed a few targets to scan and assess.
After some thought, we decided on the following targets:
1. DVWA – Damn Vulnerable Web Application – An old-school extremely vulnerable application, written in PHP. The vulnerabilities in this application should be detected without an issue.
2. Vulnapi – A Python 3-based vulnerable REST API, written in the FastAPI framework running on the Starlette ASGI server, featuring a number of API-based vulnerabilities.
After logging in to Netsparker, you are greeted with a tutorial and a “hand-holding” wizard that helps you set everything up. If you have worked with a vulnerability scanner before, you might know what to do, but this feature is useful for people who don’t have that experience, e.g., software or DevOps engineers, who should definitely use such tools in their development processes.
Initial setup wizard
Scanning targets can be added manually or through a discovery feature that will try to find them by matching the domain from your email, websites, reverse IP lookups and other methods. This is a useful feature if other methods of asset management are not used in your organization and you can’t find your assets.
New websites or assets for scanning can be added directly or imported via a CSV or a TXT file. Sites can be organized in Groups, which helps with internal organization or per project / per department organization.
Adding websites for scanning
Scans can be defined per group or per specific host. Scans can be either defined as one-off scans or be regularly scheduled to facilitate the continuous vulnerability remediation process.
To better guide the scanning process, the classic scan scope features are supported. For example, you can define specific URLs as “out-of-scope” either by supplying a full path or a regex pattern – a useful option if you want to skip specific URLs (e.g., logout, user delete functions). Specific HTTP methods can also be marked as out-of-scope, which is useful if you are testing an API and want to skip DELETE methods on endpoints or objects.
Initial scan configuration
Scan scope options
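The out-of-scope idea can be pictured with a couple of path patterns; the regexes below are illustrative and do not reflect Netsparker's own configuration syntax:

```python
import re

# Skip logout and destructive endpoints so the crawler doesn't kill its own
# session or delete test data. Patterns are invented for illustration.
OUT_OF_SCOPE = [
    re.compile(r"/logout\b"),
    re.compile(r"/users/\d+/delete$"),
]

def in_scope(path):
    return not any(p.search(path) for p in OUT_OF_SCOPE)

print(in_scope("/account/settings"))  # True  -> will be crawled
print(in_scope("/logout"))            # False -> skipped
print(in_scope("/users/42/delete"))   # False -> skipped
```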
One feature we quite liked is the support for uploading the “sitemap” or specific request information into Netsparker before scanning. This feature can be used to import a Postman collection or an OpenAPI file to facilitate scanning and improve detection capabilities for complex applications or APIs. Other formats such as CSV, JSON, WADL, WSDL and others are also supported.
For the red team, loading links and information from Fiddler, Burp or ZAP session files is supported, which is useful if you want to expand your automated scanning toolbox. One limitation we encountered is the inability to point to a URL containing an OpenAPI definition – a capability that would be extremely useful for automated and scheduled scanning workflows for APIs that have Swagger web UIs.
Scan policies can be customized and tuned in a variety of ways, from the languages that are used in the application (ASP/ASP.NET, PHP, Ruby, Java, Perl, Python, Node.js and Other), to database servers (Microsoft SQL server, MySQL, Oracle, PostgreSQL, Microsoft Access and Others), to the standard choice of Windows or Linux based OSes. Scan optimizations should improve the detection capability of the tool, shorten scanning times, and give us a glimpse where the tool should perform best.
The next important question is, does it blend… or integrate? From an integration standpoint, sending email and SMSes about the scan events is standard, but support for various issue tracking systems like Jira, Bitbucket, Gitlab, Pagerduty, TFS is available, and so is support for Slack and CI/CD integration. For everything else, there is a raw API that can be used to tie in Netsparker to other solutions if you are willing to write a bit of integration scripting.
One really well-implemented feature is the support for logging into the application under test, as the inability to hold a session and scan from an authenticated context can lead to poor scanning performance.
Netsparker supports classic form-based login, as well as 2FA-based login flows that require TOTP or HOTP. This is a great feature: you can add the OTP seed and define the period in Netsparker, and you are all set to scan OTP-protected logins. No more shimming and adding code to bypass the 2FA method in order to scan the application.
Custom scripting workflow for authentication
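The OTP support boils down to standard RFC 6238 TOTP math: given the seed and period, the scanner can mint the current code on every login. A minimal sketch, checked against the RFC's published test vector:

```python
import hashlib
import hmac
import struct

# Minimal RFC 6238 TOTP: HMAC the time-step counter, dynamic-truncate,
# and take the last `digits` decimal digits.
def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    counter = at // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at t=59s the expected 6-digit SHA-1 code is 287082
print(totp(b"12345678901234567890", 59))  # 287082
```

In real use the time argument would be `int(time.time())`, which is why sharing the seed with the scanner is all the configuration that's needed.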
If we had to nitpick, we might point out that it would be great if Netsparker also supported U2F / FIDO2 implementations (by software emulating the CTAP1 / CTAP2 protocol), since that would cover the most secure 2FA implementations.
In addition to form-based authentication, Basic NTLM/Kerberos, Header based (for JWTs), Client Certificate and OAuth2-based authentication is also supported, which makes it easy to authenticate to almost any enterprise application. The login / logout flow is also verified and supported through a custom dialog, where you can verify that the supplied credentials work, and you can configure how to retain the session.
Login verification helper
And now for the core of this review: what Netsparker did and did not detect.
In short, everything from DVWA was detected, except broken client-side security, which by definition is almost impossible to detect with security scanning if custom rules aren’t written. So, from a “classic” application point of view, the coverage is excellent, even the out-of-date software versions were flagged correctly. Therefore, for normal, classic stateful applications, written in a relatively new language, it works great.
One interesting point for vulnerability detection is that Netsparker uses an engine that tries to verify if the vulnerability is exploitable and will try to create a “proof” of vulnerability, which reduces false positives.
On the negative side, no vulnerabilities in WebSocket-based communications were found, and neither was the API endpoint that implemented insecure YAML deserialization with pyYAML. By reviewing the Netsparker knowledge base, we also found that there is no support for websockets and deserialization vulnerabilities.
That’s certainly not a dealbreaker, but something that needs to be taken into account. This also reinforces the need to use a SAST-based scanner (even if just a free, open source one) in the application security scanning stack, to improve test coverage in addition to other, manual based security review processes.
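Since the missed pyYAML issue is a deserialization flaw, it is worth recalling what that class of bug looks like. Python's own pickle shows the same failure mode with nothing but the standard library: a crafted payload names a callable that runs the moment it is deserialized (os.getcwd here stands in for arbitrary attacker code):

```python
import os
import pickle

class Evil:
    def __reduce__(self):
        # On unpickling, os.getcwd() is called -- a harmless stand-in for
        # the arbitrary code an attacker would actually run.
        return (os.getcwd, ())

payload = pickle.dumps(Evil())  # what an attacker would send
result = pickle.loads(payload)  # deserializing executes os.getcwd()
print(result == os.getcwd())    # True: the attacker-chosen callable ran
```

This is why such flaws are hard for a black-box scanner to observe: the payload executes server-side and may leave nothing visible in the HTTP response.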
Multiple levels of detail (from extensive, executive summary, to PCI-DSS level) are supported, both in a PDF or HTML export option. One nice feature we found is the ability to create F5 and ModSecurity rules for virtual patching. Also, scanned and crawled URLs can be exported from the reporting section, so it’s easy to review if your scanner hit any specific endpoints.
Scan results dashboard
Scan result details
Instead of describing the reports, we decided to export a few and attach them to this review for your enjoyment and assessment. All of them have been submitted to VirusTotal for our more cautious readers.
Netsparker’s reporting capabilities satisfy our requirements: the reports contain everything a security or AppSec engineer or a developer needs.
Since Netsparker integrates with JIRA and other ticketing systems, the general vulnerability management workflow for most teams will be supported. For lone security teams, or where modern workflows aren’t integrated, Netsparker also has an internal issue tracking system that will let the user track the status of each found issue and run rescans against specific findings to see if mitigations were properly implemented. So even if you don’t have other methods of triage or processes set up as part of a SDLC, you can manage everything through Netsparker.
Netsparker is extremely easy to set up and use. The wide variety of integrations allow it to be integrated into any number of workflows or management scenarios, and the integrated features and reporting capabilities have everything you would want from a standalone tool. As far as features are concerned, we have no objections.
The login flow handling stands out – from the simple interface and the 2FA support all the way to the scripting interface that makes it easy to authenticate even in more complex environments – and the option to report on the scanned and crawled endpoints helps users assess their scanning coverage.
Taking into account the fact that this is an automated scanner that relies on “black boxing” a deployed application, without any instrumentation of the deployed environment or source code scanning, we think it is very accurate, though it could be improved (e.g., by adding the capability of detecting deserialization vulnerabilities). Following the review, Netsparker confirmed that adding this capability is included in the product development plans.
Nevertheless, we can highly recommend Netsparker.
The importance of privacy and data protection is a critical issue for organizations as it transcends beyond legal departments to the forefront of an organization’s strategic priorities.
FairWarning research, based on survey results from more than 550 global privacy and data protection, IT, and compliance professionals, outlines the characteristics and behaviors of advanced privacy and data protection teams.
By examining the trends of privacy adoption and maturity across industries, the research uncovers adjustments that security and privacy leaders need to make to better protect their organization’s data.
The prevalence of data and privacy attacks
Insights from the research reinforce the importance of privacy and data protection as 67% of responding organizations documented at least one privacy incident within the past three years, and over 24% of those experienced 30 or more.
Additionally, 50% of all respondents reported at least one data breach in the last three years, with 10% reporting 30 or more.
Overall immaturity of privacy programs
Despite increased regulations, breaches and privacy incidents, organizations have not rapidly accelerated the advancement of their privacy programs: 44% responded that they are in the early stages of adoption and 28% are in the middle stages.
Healthcare and software rise to the top
Despite an overall lack of maturity across industries, healthcare and software organizations reflect more maturity in their privacy programs, as compared to insurance, banking, government, consulting services, education institutions and academia.
Harnessing the power of data and privacy programs
Respondents understand the significant benefits of a mature privacy program, as organizations experience greater gains across every area measured, including increased employee privacy awareness, mitigation of data breaches, greater consumer trust, reduced privacy complaints, quality and innovation, competitive advantage, and operational efficiency.
Of note, more mature companies believe they experience the largest gain in reducing privacy complaints (30.3% higher than early stage respondents).
Attributes and habits of mature privacy and data protection programs
Companies with more mature privacy programs are more likely to have C-Suite privacy and security roles within their organization than those in the mid- to early-stages of privacy program development.
Additionally, 88.2% of advanced stage organizations know where most or all of their personally identifiable information/personal health information is located, compared to 69.5% of early stage respondents.
Importance of automated tools to monitor user activity
Insights reveal a clear distinction between the maturity levels of privacy programs and related benefits of automated tools as 54% of respondents with more mature programs have implemented this type of technology compared with only 28.1% in early stage development.
Automated tools enable organizations to monitor all user activity in applications and efficiently identify anomalous activity that signals a breach or privacy violation.
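As a simple illustration of the kind of anomaly detection such tools perform, the sketch below flags users whose latest daily record-access count deviates sharply from their own historical baseline. The threshold, data shapes, and user names are illustrative assumptions, not any vendor's actual algorithm:

```python
from statistics import mean, stdev

def flag_anomalous_users(access_counts, threshold=3.0):
    """Flag users whose latest daily access count exceeds their
    historical mean by more than `threshold` standard deviations.

    access_counts: dict mapping user -> list of daily access counts,
    where the last entry is the day under review.
    """
    flagged = []
    for user, counts in access_counts.items():
        history, today = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough baseline data to judge
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1.0  # avoid division by zero on a flat baseline
        if (today - mu) / sigma > threshold:
            flagged.append(user)
    return flagged

activity = {
    "nurse_a": [40, 38, 42, 41, 39, 43],   # steady usage
    "clerk_b": [12, 10, 11, 13, 12, 480],  # sudden spike in record access
}
print(flag_anomalous_users(activity))  # ['clerk_b']
```

Real products layer far more context on top (role, patient relationship, time of day), but the core idea is the same: establish a per-user baseline, then surface statistically unusual access.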
“It is exciting to see healthcare at the top when it comes to privacy maturity. However, as we dig deeper into the data, we find that 37% of respondents with 30 or more breaches are from healthcare, indicating that there is still more work to be done.
“This study highlights useful guidance on steps all organizations can take regardless of industry or size to advance their program and ensure they are at the forefront of privacy and data protection.”
“As the research has demonstrated, it is imperative that security and privacy professionals recognize the importance of implementing privacy and data protection programs to not only reduce privacy complaints and data breaches, but increase operational efficiency.”
Despite 88% of cybersecurity professionals believing automation will make their jobs easier, younger staffers are more concerned than their veteran counterparts that the technology will replace their roles, according to research by Exabeam.
Overall, satisfaction levels continued a 3-year positive trend, with 96% of respondents indicating they are happy with their roles and responsibilities and 87% reportedly pleased with salary and earnings. Additionally, there was improvement in gender diversity, with female respondents increasing from 9% in 2019 to 21% this year.
“The concern for automation among younger professionals in cybersecurity was surprising to us. In trying to understand this sentiment, we could partially attribute it to lack of on-the-job training using automation technology,” said Samantha Humphries, security strategist at Exabeam.
“As we noted earlier this year in our State of the SOC research, ambiguity around career path or lack of understanding about automation can have an impact on job security. It’s also possible that this is a symptom of the current economic climate or a general lack of experience navigating the workforce during a global recession.”
AI and ML: A threat to job security?
Of respondents under the age of 45, 53% agreed or strongly agreed that AI and ML are a threat to their job security. This is contrasted with just 25% of respondents 45 and over who feel the same, possibly indicating that subsets of security professionals in particular prefer to write rules and manually investigate.
Interestingly, when asked directly about automation software, 89% of respondents under 45 years old believed it would improve their jobs, yet 47% are still threatened by its use. This is again in contrast with the 45 and over demographic, where 80% believed automation would simplify their work, and only 22% felt threatened by its use.
Examining sentiments around automation by region, 47% of US respondents were concerned about job security when automation software is in use, compared with Singapore (54%), Germany (42%), Australia (40%) and the UK (33%).
In the survey, which drew insights from professionals throughout the US, the UK, Australia, Canada, India and the Netherlands, only 10% overall believed that AI and automation were a threat to their jobs.
On the flip side, there were noticeable increases in job approval across the board, with an upward trend in satisfaction around role and responsibilities (96%), salary (87%) and work/life balance (77%).
Diversity showing positive signs of improvement
When asked what else they enjoyed about their jobs, respondents listed working in an environment with professional growth (15%) as well as opportunities to challenge oneself (21%) as top motivators.
53% reported jobs that are either stressful or very stressful, which is down from last year (62%). Interestingly, despite being among those that are generally threatened by automation software, 100% of respondents aged 18-24 reported feeling secure in their roles and were happiest with their salaries (93%).
Though the number of female respondents increased this year, it remains to be seen whether this will emerge as a trend. This year’s male respondents (78%) are down 13 percentage points from last year’s 91%.
In 2019, nearly 41% had been in the profession for at least 10 years. This year, a larger percentage (83%) have 10 years of experience or less, and 34% have been in the cybersecurity industry for five years or less. Additionally, one-third do not have formal cybersecurity degrees.
“There is evidence that automation and AI/ML are being embraced, but this year’s survey exposed fascinating generational differences when it comes to professional openness and using all available tools to do their jobs,” said Phil Routley, senior product marketing manager, APJ, Exabeam.
“And while gender diversity is showing positive signs of improvement, it’s clear we still have a very long way to go in breaking down barriers for female professionals in the security industry.”
Earlier this week, SonicWall patched 11 vulnerabilities affecting its Network Security Appliance (NSA) devices. Among those is CVE-2020-5135, a critical stack-based buffer overflow vulnerability in the appliances’ VPN Portal that could be exploited to cause denial of service and possibly remote code execution.
The SonicWall NSAs are next-generation firewall appliances, with a sandbox, an intrusion prevention system, SSL/TLS decryption and inspection capabilities, network-based malware protection, and VPN capabilities.
CVE-2020-5135 was discovered by Nikita Abramov of Positive Technologies and Craig Young of Tripwire’s Vulnerability and Exposures Research Team (VERT), and has been confirmed to affect:
- SonicOS 6.5.4.7-79n and earlier
- SonicOS 6.5.1.11-4n and earlier
- SonicOS 6.0.5.3-93o and earlier
- SonicOSv 6.5.4.4-44v-21-794 and earlier
- SonicOS 7.0.0.0-1
“The flaw can be triggered by an unauthenticated HTTP request involving a custom protocol handler. The vulnerability exists within the HTTP/HTTPS service used for product management as well as SSL VPN remote access,” Tripwire VERT explained.
“This flaw exists pre-authentication and within a component (SSLVPN) which is typically exposed to the public Internet.”
By using Shodan, both Tripwire and Tenable researchers discovered nearly 800,000 SonicWall NSA devices with the affected HTTP server banner exposed on the internet. Though, as the latter noted, it is impossible to determine the actual number of vulnerable devices because their respective versions could not be determined (i.e., some may already have been patched).
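Identification at that scale relied on the HTTP `Server` response header. A minimal sketch of the same banner check against a single host is shown below; the banner substring is an assumption based on the researchers' description, and this only fingerprints the product – it cannot determine whether a particular device has been patched:

```python
import urllib.error
import urllib.request

# Substring reportedly present in the exposed HTTP Server banner (assumption).
SONICWALL_MARKER = "SonicWALL"

def is_sonicwall_banner(banner):
    """Pure check: does an HTTP Server banner look like a SonicWall appliance?"""
    return SONICWALL_MARKER in (banner or "")

def fetch_server_banner(url, timeout=5):
    """Fetch the Server header from a host, even when it answers with an error page."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.headers.get("Server", "")
    except urllib.error.HTTPError as e:
        return e.headers.get("Server", "")
```

This mirrors the limitation the researchers noted: the banner reveals the product, not its exact firmware version, so the count of truly vulnerable devices remains an estimate.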
A persistent DoS condition is apparently easy for attackers to achieve, as it requires no prior authentication and can be triggered by sending a specially crafted request to the vulnerable service/SSL VPN portal.
VERT says that a code execution exploit is “likely feasible,” though it’s a bit more difficult to pull off.
Mitigation and remediation
There is currently no evidence that the flaw is being actively exploited nor is there public PoC exploitation code available, so admins have a window of opportunity to upgrade affected devices.
Aside from implementing the offered update, they can alternatively disconnect the SSL VPN portal from the internet, though this action does not mitigate the risk of exploitation of some of the other flaws fixed by the latest updates.
Exposures and cybersecurity challenges can turn out to be costly: according to statistics from the US Department of Health and Human Services (HHS), 861 breaches of protected health information have been reported over the last 24 months.
New research from RiskRecon and the Cyentia Institute pinpointed risk in third-party healthcare supply chain and showed that healthcare’s high exposure rate indicates that managing a comparatively small Internet footprint is a big challenge for many organizations in that sector.
But there is a silver lining: gaining the visibility needed to pinpoint and rectify exposures in the healthcare risk surface is feasible.
The research and report are based on RiskRecon’s assessment of more than five million internet-facing systems across approximately 20,000 organizations, focusing exclusively on the healthcare sector.
Healthcare has one of the highest average rates of severe security findings relative to other industries. Furthermore, those rates vary hugely across institutions, meaning the worst exposure rates in healthcare are worse than the worst exposure rates in other sectors.
Rates of severe security findings decrease as the number of employees increases. For example, the rate of severe security findings among the smallest healthcare providers is three times higher than that of the largest providers.
Sub-sectors vary
Sub-sectors within healthcare reveal different risk trends. The research shows that hospitals have a much larger Internet surface area (hosts, providers, countries), but maintain relatively low rates of security findings. Additionally, the nursing and residential care sub-sector has the smallest Internet footprint yet the highest levels of exposure. Outpatient (ambulatory) and social services mostly fall in between hospitals and nursing facilities.
Cloud deployment impacts
As digital transformation ushers in a plethora of changes, critical areas of risk exposure are also changing and expanding. While most healthcare firms host a majority of their Internet-facing systems on-prem, they do also leverage the cloud. We found that healthcare’s severe finding rate for high-value assets in the cloud is 10 times that of on-prem. This is the largest on-prem versus cloud exposure imbalance of any sector.
It must also be noted that not all cloud environments are the same. A previous RiskRecon report on the cloud risk surface discovered an average 12 times the difference between cloud providers with the highest and lowest exposure rates. This says more about the users and use cases of various cloud platforms than intrinsic security inequalities. In addition, as healthcare organizations look to migrate to the cloud, they should assess their own capabilities for handling cloud security.
The healthcare supply chain is at risk
It’s important to realize that the broader healthcare ecosystem spans numerous industries, and these entities often have deep connections into healthcare providers’ facilities, operations, and information systems – meaning those connections can have significant ramifications for third-party risk management.
When you dig into it, even though big pharma has the biggest footprint (hosts, third-party service providers, and countries of operation), they keep it relatively hygienic. Manufacturers of various types of healthcare apparatus and instruments show a similar profile of extensive assets yet fewer findings. Unfortunately, the information-heavy industries of medical insurance, EHR systems providers, and collection agencies occupy three of the top four slots for the highest rate of security findings.
“In 2020, Health Information Sharing and Analysis Center (H-ISAC) members across healthcare delivery, big pharma, payers and medical device manufacturers saw increased cyber risks across their evolving and sometimes unfamiliar supply chains,” said Errol Weiss, CSO at H-ISAC.
“Adjusting to the new operating environment presented by COVID-19 forced healthcare companies to rapidly innovate and adopt solutions like cloud technology that also added risk with an expanded digital footprint to new suppliers and partners with access to sensitive patient data.”
Cyborg Security launches HUNTR platform to help orgs tackle cyber threats
Cyborg Security’s HUNTR platform provides advanced and contextualized threat hunting and detection packages containing behaviorally based threat hunting content, threat emulation, and detailed runbooks, supplying organizations with what they need to evolve their security analysts into skilled hunters.
Cloudflare One: A cloud-based network-as-a-service solution for the remote workforce
As more businesses rely on the internet to operate, Cloudflare One protects and accelerates the performance of devices, applications, and entire networks to keep workforces secure. Now businesses can protect their workforce in a flexible and scalable way, without compromising security as distributed teams work from multiple devices and personal networks.
Booz Allen Hamilton unveils SnapAttack, bringing together red and blue security teams
By unifying the security lifecycle into a single solution, SnapAttack enables red and blue teams to work together, emulating attacks from intelligence data, sharing insights of malicious behavior, and developing vendor-agnostic behavioral detection analytics to stop advanced adversaries.
BAE Systems unveils cyber-threat detection and mitigation solution for U.S. military platforms
The Fox Shield suite is designed to help platforms detect, respond, and recover from cyber attacks in real time. The system’s cyber resilience capabilities can be integrated into ground, air, and space vehicles to protect our warfighters and platforms from cyber attacks designed to access and degrade mission capabilities.
Shujinko AuditX: Simplifying, automating and modernizing audit preparation and compliance
AuditX automates evidence collection, maps evidence across multiple controls and across different standards, streamlines audit workflow and clarifies communication across teams and with auditors. AuditX organizes evidence in a centralized library for final readiness review and provides a 360-degree dashboard to make the entire process highly visible and predictable.
Masergy extends the value of Masergy SD-WAN Secure to home and mobile users
Masergy’s Work From Anywhere solutions include SD-WAN Secure Home for executives and power users requiring unwavering reliability from their home office connections and SD-WAN On the Go for mobile users needing secure access to corporate and cloud applications.
C2A Security launches AutoSec, an automotive cybersecurity lifecycle management platform
C2A Security announced the launch of its flagship cybersecurity product, AutoSec, a cybersecurity lifecycle management platform. AutoSec meets the rapidly-evolving challenges of vehicle cybersecurity with an open platform that empowers industry stakeholders to identify and mitigate cyber attacks.
Starting next week, Zoom users – both those who are on one of the paid plans and those who use it for free – will be able to try out the solution’s new end-to-end encryption (E2EE) option.
In this first rollout phase, all meeting participants:
- Must join from the Zoom desktop client, mobile app, or Zoom Rooms
- Must enable the E2EE option at the account level and then for each meeting they want to use E2EE for
How does Zoom E2EE work?
“Zoom’s E2EE uses the same powerful GCM encryption you get now in a Zoom meeting. The only difference is where those encryption keys live,” the company explained.
“In typical meetings, Zoom’s cloud generates encryption keys and distributes them to meeting participants using Zoom apps as they join. With Zoom’s E2EE, the meeting’s host generates encryption keys and uses public key cryptography to distribute these keys to the other meeting participants. Zoom’s servers become oblivious relays and never see the encryption keys required to decrypt the meeting contents.”
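Conceptually, this key distribution is hybrid encryption: the host generates a random symmetric meeting key and encrypts (“wraps”) it to each participant’s public key, so the relaying server never sees usable key material. Below is a simplified sketch of that general pattern using X25519 and AES-GCM via the `cryptography` package – it illustrates the technique only, not Zoom’s actual protocol, which is specified in Zoom’s published E2EE design documents:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_wrapping_key(shared_secret):
    """Derive an AES-256 key-wrapping key from an X25519 shared secret."""
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"meeting-key-wrap").derive(shared_secret)

# Host generates the symmetric meeting key used for media encryption.
meeting_key = AESGCM.generate_key(bit_length=256)

# Each participant has an X25519 key pair; the host learns the public half.
participant_priv = X25519PrivateKey.generate()
participant_pub = participant_priv.public_key()

# Host wraps the meeting key for this participant using an ephemeral key pair.
host_eph = X25519PrivateKey.generate()
wrap_key = derive_wrapping_key(host_eph.exchange(participant_pub))
nonce = os.urandom(12)
wrapped = AESGCM(wrap_key).encrypt(nonce, meeting_key, None)

# The server only ever relays `wrapped`, `nonce`, and the host's public key.
# Participant unwraps with their private key and the host's ephemeral public key.
unwrap_key = derive_wrapping_key(participant_priv.exchange(host_eph.public_key()))
recovered = AESGCM(unwrap_key).decrypt(nonce, wrapped, None)
assert recovered == meeting_key
```

The key property is visible in the last step: anyone who sees only the relayed ciphertext cannot recover the meeting key without a participant’s private key, which is exactly what makes the relay “oblivious.”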
The option will be available as a technical preview and will work for meetings with up to 200 participants. In order to join an E2EE meeting, all participants must have the E2EE setting enabled.
For the moment, though, enabling E2EE for a meeting means giving up on certain features: “join before host”, cloud recording, streaming, live transcription, Breakout Rooms, polling, 1:1 private chat, and meeting reactions.
“Participants will also see the meeting leader’s security code that they can use to verify the secure connection. The host can read this code out loud, and all participants can check that their clients display the same code,” the company added.
E2EE for everybody
In June 2020, Zoom CEO Eric Yuan announced the company’s intention to offer E2EE only to paying customers, but after a public outcry they decided to extend its benefits to customers with free accounts as well.
“Free/Basic users seeking access to E2EE will participate in a one-time verification process that will prompt the user for additional pieces of information, such as verifying a phone number via text message. Many leading companies perform similar steps to reduce the mass creation of abusive accounts,” the company reiterated in its latest announcement.