Credential stuffing attacks are taking up a lot of the oxygen in cybersecurity rooms these days. A steady blitz of large-scale breaches in recent years has flooded the dark web with passwords and other credentials, which are then used in subsequent attacks such as those on Reddit and State Farm, as well as in widespread efforts to exploit the remote work and online gatherings brought on by the COVID-19 pandemic.
But while enterprises are rightly worried about weathering a hurricane of credential-stuffing attacks, they also need to be concerned about more subtle, but equally dangerous, threats to APIs that can slip in under the radar.
Attacks that exploit APIs, beyond credential stuffing, can start small with targeted probing of unique API logic, and lead to exploits such as the theft of personal information, wholesale data exfiltration or full account takeovers.
Unlike automated, flood-the-zone, volume-based credential attacks, other API attacks are conducted almost one-to-one and carried out in elusive ways, targeting the distinct vulnerabilities of each API, making them even harder to detect than attacks happening at large scale. Yet they’re capable of causing as much, if not more, damage. And they’re becoming more and more prevalent as APIs become the foundation of modern applications.
Beyond credential stuffing
Credential stuffing attacks are a key concern for good reason. High profile breaches—such as those of Equifax and LinkedIn, to name two of many—have resulted in billions of compromised credentials floating around on the dark web, feeding an underground industry of malicious activity. For several years now, about 80% of breaches that have resulted from hacking have involved stolen and/or weak passwords, according to Verizon’s annual Data Breach Investigations Report.
Additionally, research by Akamai determined that three-quarters of credential abuse attacks against the financial services industry in 2019 were aimed at APIs. Many of those attacks are conducted on a large scale to overwhelm organizations with millions of automated login attempts.
Most threats to APIs move beyond credential stuffing, which is only one of many API threats defined in the 2019 OWASP API Security Top 10. In many instances they are not automated, are much more subtle and come from authenticated users.
APIs, which are essential to an increasing number of applications, are specialized entities performing particular functions for specific organizations. Someone exploiting a vulnerability in an API used by a bank, retailer or other institution could, with a couple of subtle calls, dump the database, drain an account, cause an outage or do all kinds of other damage to impact revenue and brand reputation.
An attacker doesn’t even have to necessarily sneak in. For instance, they could sign on to Disney+ as a legitimate user and then poke around the API looking for opportunities to exploit. In one example of a front-door approach, a researcher came across an API vulnerability on the Steam developer site that would allow the theft of game license keys. (Luckily for the company, he reported it—and was rewarded with $20,000.)
Most API attacks are very difficult to detect and defend against since they’re carried out in such a clandestine manner. Because APIs are mostly unique, their vulnerabilities don’t conform to any pattern or signature that would allow common security controls to be enforced at scale. And the damage can be considerable, even coming from a single source. For example, an attacker exploiting a weakness in an API could launch a successful DoS attack with a single request.
Rather than the more common DDoS attack, which floods a target with requests from many sources via a botnet, an API DoS can happen when the attacker manipulates the logic of the API, causing the application to overwork itself. If an API is designed to return, say, 10 items per request, an attacker could change that value to 10 million, using up all of an application’s resources and crashing it—with a single request.
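As a defensive illustration, the server-side fix is to treat any client-supplied page-size value as untrusted and clamp it to a hard cap. This is a minimal sketch, not a reference implementation; the parameter handling and the limits of 10 and 100 are illustrative assumptions:

```python
MAX_PAGE_SIZE = 100  # hard server-side cap, regardless of what the client asks for

def parse_page_size(raw_value, default=10):
    """Clamp a client-supplied page-size parameter to a safe range."""
    try:
        requested = int(raw_value)
    except (TypeError, ValueError):
        return default
    if requested < 1:
        return default
    return min(requested, MAX_PAGE_SIZE)

# A client asking for 10 million items gets at most the server-side cap.
assert parse_page_size("10000000") == 100
assert parse_page_size("25") == 25
assert parse_page_size("abc") == 10
```

Returning the default for malformed input is a design choice; an API could equally reject the request outright.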
Credential stuffing attacks present security challenges of their own. With easy access to evasion tools—and with their own sophistication improving dramatically—it’s not difficult for attackers to disguise their activity behind a mesh of thousands of IP addresses and devices. But credential stuffing nevertheless is an established problem with established solutions.
How enterprises can improve
Enterprises can scale infrastructure to mitigate credential stuffing attacks or buy a solution capable of identifying and stopping the attacks. The trick is to evaluate large volumes of activity and block malicious login attempts without impacting legitimate users, and to do it quickly, identifying successful malicious logins and alerting users in time to protect them from fraud.
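To make the detection side concrete, here is a minimal sketch of one common heuristic: a single source producing failed logins across many distinct accounts looks like stuffing, while one user failing repeatedly on their own account does not. The event format, IP addresses and thresholds are illustrative assumptions:

```python
from collections import defaultdict

def flag_stuffing_sources(events, min_failures=20, min_distinct_users=10):
    """Flag source IPs whose failed logins span many distinct accounts --
    the signature of credential stuffing, not of one user mistyping."""
    failures = defaultdict(int)
    users = defaultdict(set)
    for source_ip, username, success in events:
        if not success:
            failures[source_ip] += 1
            users[source_ip].add(username)
    return {
        ip for ip in failures
        if failures[ip] >= min_failures and len(users[ip]) >= min_distinct_users
    }

# One IP spraying leaked credentials across many accounts gets flagged;
# a single user fumbling their own password does not.
events = [("203.0.113.9", f"user{i}", False) for i in range(25)]
events += [("198.51.100.4", "alice", False)] * 5
assert flag_stuffing_sources(events) == {"203.0.113.9"}
```

Real solutions combine many more signals (device fingerprints, known-leaked password lists, IP reputation), but the core idea of separating distributed abuse from legitimate failure remains.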
Enterprises can improve API security first and foremost by identifying all of their APIs, including what data each exposes and how each is used, and even those they didn’t know existed. When APIs fly under security operators’ radar, otherwise secure infrastructure has a hole in the fence. Once full visibility is attained, enterprises can more tightly control API access and use, and thus enable better security.
As ransomware continues to prove how devastating it can be, one of the scariest things for security pros is how quickly it can paralyze an organization. Just look at Honda, which was forced to shut down all global operations in June, and Garmin, which had its services knocked offline for days in July.
Ransomware isn’t hard to detect, but identifying it once encryption and exfiltration are rampant is too little, too late. However, there are several warning signs that organizations can catch before the real damage is done. In fact, FireEye found that there are usually three days of dwell time between these early warning signs and the detonation of ransomware.
So, how does a security team find these weak but important early warning signals? Somewhat surprisingly perhaps, the network provides a unique vantage point to spot the pre-encryption activity of ransomware actors such as those behind Maze.
Here’s a guide, broken down by MITRE category, of the many different warning signs organizations being attacked by Maze ransomware can see and act upon before it’s too late.
Initial access
With Maze actors, there are several initial access vectors, such as phishing attachments and links, external-facing remote access such as Microsoft’s Remote Desktop Protocol (RDP), and access via valid accounts. All of these can be discovered while network threat hunting across traffic. Furthermore, given this represents the actor’s earliest foray into the environment, detecting this initial access is the organization’s best bet to significantly mitigate impact.
T1193 Spear-phishing attachment
T1133 External remote services
T1078 Valid accounts
T1190 Exploit public-facing application
Execution
The execution phase is still early enough in an attack to shut it down and foil any attempts to detonate ransomware. Common early warning signs to watch for in execution include users being tricked into clicking a phishing link or attachment, or when certain tools such as PsExec have been used in the environment.
T1204 User execution
T1035 Service execution
T1028 Windows remote management
Persistence
Adversaries using Maze rely on several common techniques, such as a web shell on internet-facing systems and the use of valid accounts obtained within the environment. Once the adversary has secured a foothold, it starts to become increasingly difficult to mitigate impact.
T1100 Web shell
T1078 Valid accounts
Privilege escalation
As an adversary gains higher levels of access, it becomes significantly more difficult to pick up additional signs of activity in the environment. For Maze actors, the techniques used for privilege escalation are similar to those used for persistence.
T1100 Web shell
T1078 Valid accounts
Defense evasion
To hide files and their access to different systems, Maze adversaries rename files, encode and archive data, and use other mechanisms to cover their tracks. Attempts to hide their traces are in themselves indicators to hunt for.
T1027 Obfuscated files or information
T1078 Valid accounts
Credential access
There are several defensive controls that can be put in place to help limit or restrict access to credentials. Threat hunters can enable this process by providing situational awareness of network hygiene, including specific attack tool usage, credential misuse attempts and weak or insecure passwords.
T1110 Brute force
T1081 Credentials in files
Discovery
Maze adversaries use a number of different methods for internal reconnaissance and discovery. For example, enumeration and data collection tools and methods leave their own trail of evidence that can be identified before the exfiltration and encryption occurs.
T1201 Password policy discovery
T1018 Remote system discovery
T1087 Account discovery
T1016 System network configuration discovery
T1135 Network share discovery
T1083 File and directory discovery
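As a rough illustration of hunting for this enumeration activity on the network, a source that suddenly contacts far more distinct internal hosts than any normal client would is a candidate discovery sweep. A sketch under assumed (source, destination) flow records, with an illustrative threshold:

```python
from collections import defaultdict

def flag_scanners(flows, max_distinct_peers=50):
    """Flag sources that touch an unusually large number of distinct
    internal hosts -- the network footprint of discovery techniques
    such as remote system and network share enumeration."""
    peers = defaultdict(set)
    for src, dst in flows:
        peers[src].add(dst)
    return {src for src, dsts in peers.items() if len(dsts) > max_distinct_peers}

# A workstation sweeping a /24 stands out against normal client behavior.
flows = [("10.0.0.5", f"10.0.0.{i}") for i in range(1, 200)]
flows += [("10.0.0.7", "10.0.0.10"), ("10.0.0.7", "10.0.0.11")]
assert flag_scanners(flows) == {"10.0.0.5"}
```

In practice the threshold would be learned per host role rather than fixed, since domain controllers and scanners legitimately talk to many peers.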
Lateral movement
Ransomware actors use lateral movement to understand the environment, spread through the network, and then collect and prepare data for encryption and exfiltration.
T1105 Remote file copy
T1077 Windows admin shares
T1076 Remote Desktop Protocol
T1028 Windows remote management
T1097 Pass the ticket
Collection
In this phase, Maze actors use tools and batch scripts to collect information and prepare for exfiltration. It is typical to find .bat files or archives using the .7z or .exe extension at this stage.
T1039 Data from network shared drive
Command and control (C2)
Many adversaries will use common ports or remote access tools to try and obtain and maintain C2, and Maze actors are no different. In the research my team has done, we’ve also seen the use of ICMP tunnels to connect to the attacker infrastructure.
T1043 Commonly used port
T1071 Standard application layer protocol
T1105 Remote file copy
T1219 Remote access tools
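One way to hunt for the ICMP tunneling mentioned above is to look at payload sizes: standard echo requests carry small, fixed payloads, while tunnels pack data into them. A simplified sketch, assuming packet records of (source, payload length) and illustrative thresholds:

```python
from collections import defaultdict

def flag_icmp_tunnels(packets, max_payload=64, min_oversized=10):
    """Flag hosts sending many oversized ICMP payloads. Ordinary echo
    requests carry small, fixed-size payloads; tunnels stuff data into them."""
    oversized = defaultdict(int)
    for src, payload_len in packets:
        if payload_len > max_payload:
            oversized[src] += 1
    return {src for src, count in oversized.items() if count >= min_oversized}

# Fifty jumbo ICMP payloads from one host vs. normal-sized pings from another.
packets = [("10.0.0.9", 1400)] * 50 + [("10.0.0.3", 56)] * 200
assert flag_icmp_tunnels(packets) == {"10.0.0.9"}
```

Payload entropy and request/reply timing are other signals real detections layer on top of this size heuristic.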
Exfiltration
At this stage, the risk of exposure of sensitive data in the public realm is dire, and it means an organization has missed many of the earlier warning signs; now it’s about minimizing impact.
T1030 Data transfer size limits
T1048 Exfiltration over alternative protocol
T1002 Data compressed
Ransomware is never good news when it shows up at the doorstep. However, with disciplined network threat hunting and monitoring, it is possible to identify an attack early in the lifecycle. Many of the early warning signs are visible on the network and threat hunters would be well served to identify these and thus help mitigate impact.
The ongoing debate surrounding privacy protection in the global data economy reached a fever pitch with July’s “Schrems II” ruling at the European Court of Justice, which struck down the Privacy Shield – a legal mechanism enabling companies to transfer personal data from the EU to the US for processing – potentially disrupting the business of thousands of companies.
The plaintiff, Austrian privacy advocate Max Schrems, claimed that US privacy legislation was insufficiently robust to prevent national security and intelligence authorities from acquiring – and misusing – Europeans’ personal data. The EU’s top court agreed, abolishing the Privacy Shield and requiring American companies that exchange data with European partners to comply with the standards set out by the GDPR, the EU’s data privacy law.
Following this landmark ruling, ensuring the secure flow of data from one jurisdiction to another will be a significant challenge, given the lack of an international regulatory framework for data transfers and emerging conflicts between competing data privacy regulations.
This comes at a time when the COVID-19 crisis has further underscored the urgent need for collaborative international research involving the exchange of personal data – in this case, sensitive health data.
Will data protection regulations stand in the way of this and other vital data sharing?
The Privacy Shield was a stopgap measure to facilitate data-sharing between the US and the EU which ultimately did not withstand legal scrutiny. Robust, compliant-by-design tools beyond contractual frameworks will be required in order to protect individual privacy while allowing data-driven research on regulated data and business collaboration across jurisdictions.
Fortunately, innovative privacy-enhancing technologies (PETs) can be the stable bridge connecting differing – and sometimes conflicting – privacy frameworks. Here’s why policy alone will not suffice to resolve existing data privacy challenges – and how PETs can deliver the best of both worlds:
A new paradigm for ethical and secure data sharing
The abolition of the Privacy Shield poses major challenges for over 5,000 American and European companies which previously relied on its existence and must now confront a murky legal landscape. While big players like Google and Zoom have the resources to update their compliance protocols and negotiate legal contracts between transfer partners, smaller innovators lack these means and may see their activities slowed or even permanently halted. Privacy legislation has already impeded vital cross-border research collaborations – one prominent example is the joint American-Finnish study regarding the genomic causes of diabetes, which “slowed to a crawl” due to regulations, according to the head of the US National Institutes of Health (NIH).
One response to the Schrems II ruling might be expediting moves towards a federal data privacy law in the US. But this would take time: in Europe, over two years passed between the adoption of GDPR and its enforcement. Given that smaller companies are facing an immediate legal threat to their regular operations, a federal privacy law might not come quickly enough.
Even if such legislation were to be approved in Washington, it is unlikely to be fully compatible with GDPR – not to mention widening privacy regulations in other countries. The CCPA, the major statewide data protection initiative, is generally considered less stringent than GDPR, meaning that even CCPA-compliant businesses would still have to adapt to European standards.
In short, the existing legislative toolbox is insufficient to protect the operations of thousands of businesses in the US and around the world, which is why it’s time for a new paradigm for privacy-preserving data sharing based on Privacy-Enhancing Technologies.
The advantages of privacy-enhancing technologies
Compliance costs and legal risks are prompting companies to consider an innovative data sharing method based on PETs: a new genre of technologies which can help them bridge competing privacy frameworks. PETs are a category of technologies that protect data along its lifecycle while maintaining its utility, even for advanced AI and machine learning processes. PETs allow their users to harness the benefits of big data while protecting personally identifiable information (PII) and other sensitive information, thus maintaining stringent privacy standards.
One such PET playing a growing role in privacy-preserving information sharing is Homomorphic Encryption (HE), a technique regarded by many as the holy grail of data protection. HE enables multiple parties to securely collaborate on encrypted data by conducting analysis on data which remains encrypted throughout the process, never exposing personal or confidential information. Through HE, companies can derive the necessary insights from big data while protecting individuals’ personal details – and, crucially, while remaining compliant with privacy legislation because the data is never exposed.
Jim Halpert, a data regulation lawyer who helped draft the CCPA and is Global Co-Chair of the Data Protection, Privacy and Security practice at DLA Piper, views certain solutions based on HE as effective compliance tools.
“Homomorphic Encryption encrypts data elements in such a way that they cannot identify, describe or in any way relate to a person or household. As a result, homomorphically encrypted data cannot be considered ‘personal information’ and is thus exempt from CCPA requirements,” Halpert says. “Companies which encrypt data through HE minimize the risk of legal threats, avoid CCPA obligations, and eliminate the possibility that a third party could mistakenly be exposed to personal data.”
The same principle applies to GDPR, which requires any personally identifiable information to be protected.
HE is applicable to any industry and activity which requires sensitive data to be analyzed by third parties; for example, research such as genomic investigations into individuals’ susceptibility to COVID-19 and other health conditions, and secure data analysis in the financial services industry, including financial crime investigations across borders and institutions. In these cases, HE enables users to legally collaborate across different jurisdictions and regulatory frameworks, maximizing data value while minimizing privacy and compliance risk.
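To make the additive flavor of homomorphic encryption concrete, here is a toy Paillier-style demonstration in pure Python: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a third party can aggregate values it can never read. This is a sketch only; the demo primes are far too small for real security, and production systems rely on vetted HE libraries rather than hand-rolled cryptography:

```python
import math
import random

def generate_keys(p=1000003, q=1000033):
    """Paillier keypair from two primes (demo-sized -- far too small for real use)."""
    n = p * q
    n_sq = n * n
    lam = math.lcm(p - 1, q - 1)
    # With generator g = n + 1, decryption simplifies and mu = lam^-1 mod n.
    mu = pow(lam, -1, n)
    return (n, n_sq), (lam, mu, n, n_sq)

def encrypt(pub, m):
    n, n_sq = pub
    while True:  # pick a random r coprime to n
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    # c = (n+1)^m * r^n mod n^2
    return (pow(n + 1, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(priv, c):
    lam, mu, n, n_sq = priv
    # L(x) = (x - 1) // n recovers m * lam; multiplying by mu = lam^-1 yields m.
    return ((pow(c, lam, n_sq) - 1) // n) * mu % n

pub, priv = generate_keys()
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
c_sum = (c1 * c2) % pub[1]          # multiply ciphertexts ...
assert decrypt(priv, c_sum) == 100  # ... to add the underlying plaintexts
```

Note that the addition step never touched the private key: the aggregating party operates entirely on ciphertexts, which is precisely the property that keeps the underlying data out of scope for exposure.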
PETs will be crucial in allowing data to flow securely even after the Privacy Shield has been lowered. The EU and the US have already entered negotiations aimed at replacing the Privacy Shield, but while a palliative solution might satisfy business interests in the short term, it won’t remedy the underlying problems inherent to competing privacy frameworks. Any replacement would face immediate legal challenges in a potential “Schrems III” case. Tech is in large part responsible for the growing data privacy quandary. The onus, then, is on tech itself to help facilitate the free flow of data without undermining data protection.
Bruce Schneier coined the phrase security theater to describe “security measures that make people feel more secure without doing anything to actually improve their security.” That’s the situation we still face today when it comes to defending against cyber security risks.
The insurance industry employs actuaries to help quantify and manage the risks insurance underwriters take. The organizations and individuals that in turn purchase insurance policies also look at their own biggest risks and the likelihood they will occur, and opt accordingly for various deductibles and riders.
Things do not work the same way when it comes to cyber security. For example: Gartner observed that most breaches are the result of a vulnerability being exploited. Furthermore, they estimate that 99% of vulnerabilities exploited are already known by the industry and not net-new zero-day vulnerabilities.
How is it possible that well known vulnerabilities are a significant conduit for attackers when organizations collectively spend at least $1B on vulnerability scanning annually? Among other things, it’s because organizations are practicing a form of security theater: they are focusing those vulnerability scanners on what they know and what is familiar; sometimes they are simply attempting to fulfill a compliance requirement.
While there has been a strong industry movement towards security effectiveness and productivity, with approaches favoring prioritizing alerts, investigations and activities, there are still a good number of security theatrics carried out in many organizations. Many simply continue conducting various security processes and maintaining security solutions that may have been valuable at one time, but now don’t address the right concerns.
Broaching a concern such as security theater with security professionals can result in defensiveness or ire from disturbing a well-established process, or worse, practitioners assuming there is some implied level of foolishness or ineptitude. Rather than lambasting security theater practices outright, a better approach is to systematically consider what gaps may exist in your organization’s security posture. Part of this exercise requires asking yourself what you don’t know. That might seem like a paradox: how does one know what one does not know?
The idea of not knowing what you don’t know is a topic that frequently turns up on CISOs’ list of reasons that “keep them up at night.” The challenge with this type of security issue is less about swiftly applying software patches or assessing vulnerabilities of identified infrastructure. Here the main concern is to identify what might be completely unaddressed: is there some aspect of the IT ecosystem that is unprotected or could serve as an effective conduit to other resources? The question is basically, “What have we overlooked?” or “What asset or business system might be completely unknown, forgotten or not under our control?” The issue is not about the weakness of the known attack surface. It’s about the unknown attack surface that is not protected.
Sophisticated attackers are adept at developing a complete picture of an organization’s entire attack surface. There are numerous tools, techniques and even hacking services that can help attackers with this task. Most attackers are pragmatic and even business-oriented, and their goal is to find the path of least resistance that will provide the greatest payoff. Often this means focusing on the least monitored and least protected part of an organization’s attack surface.
Attackers are adept at finding internet-exposed, unprotected assets or systems. Often these are forgotten or unknown assets that are both an easy entrance to a company’s network as well as valuable in their own right. The irony is that attackers therefore often have a truer picture of an attack surface than the security team charged with defending it.
Interestingly, a security organization’s effectiveness is often diminished by its own constraints, because the team will focus on what they know they need to protect, along with the established processes for doing that. Attackers have no such constraints. Rather than following prescribed rules or management by tradition, attackers will first perform reconnaissance and pursue intelligence to find the places of greatest weakness. Attackers look for these unprotected spots and favor them over resources that are actively monitored and defended.
Security organizations, on the other hand, typically start and end their assessments with their known assets. Security theater has them devoting too much focus to the known and not enough on the unknown.
Even well-established practices such as penetration testing, vulnerability assessment and security ratings can amount to security theater because they revolve around what is known. To move beyond theatrics into real effectiveness, security teams need to develop new processes to uncover the unknowns that are part of their IT ecosystem. That is exactly what attackers target. Few organizations are able to do this type of discovery and detection today, whether because of the existing workload or the level of expertise needed to do a complete assessment. In addition, it is common for bias based on pre-existing perceptions of the organization’s security posture to influence the search for the previously unknown.
The process of discovering previously unknown, exposed assets should be done on a regular basis. Automating this process—particularly due to the range of cloud, partner and subsidiary IT that must be considered—makes it more viable. While automation is necessary, it is still important for fully trained researchers to be involved to tune the process, interpret results and ensure its proper scope.
Adding a continuous process of identifying unknown, uncontrolled or abandoned assets and systems not only helps close gaps, but it expands the purview of security professionals to focus on not just what they know, but to also start considering what they do not know.
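The gap this continuous process closes can be expressed as a simple set difference between what the security team believes it owns and what external discovery actually turns up. The hostnames below are hypothetical:

```python
# What the security team believes it owns.
inventory = {"www.example.com", "api.example.com", "vpn.example.com"}

# What external discovery (DNS enumeration, certificate transparency
# logs, internet-wide scans) actually turns up for the organization.
discovered = {"www.example.com", "api.example.com", "vpn.example.com",
              "staging.example.com", "old-partner-portal.example.com"}

# The unknown attack surface: exposed assets nobody is monitoring.
unknown = discovered - inventory
assert unknown == {"staging.example.com", "old-partner-portal.example.com"}
```

The hard engineering work is in populating `discovered` continuously and accurately; the comparison itself is trivial once both views exist.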
The march towards digital transformation and the increasing volume of cyberattacks are finally driving IT security and network teams towards better collaboration. This idea isn’t new, but it’s finally being put into practice at many major enterprises.
Network traffic analysis and security
The reasons are fairly straightforward: all those new transformation initiatives – moving workloads to the cloud, pursuing virtualization or SD-WAN projects, etc. – create network traffic blind spots that can’t easily be monitored using the security tools and processes designed for simpler, traditional on-premises architectures. The result is a series of data and system islands, tool sprawl and a lack of correlation. Basically, there is lots of data, but little information. As the organization grows, the problems compound.
For a company hit with a cyber attack, the final cost can be astronomical as it includes investigation and mitigation costs, costs tied to legal exposure, insurance hikes, the acquisition of new tools, the implementation of new policies and procedures, and the hit to revenues and reputation.
Size doesn’t matter – all companies are vulnerable to an attack. To improve organizational security postures in this new hybrid network environment, Security Operations (SecOps) and Network Operations (NetOps) teams are becoming fast friends. In fact, Gartner has recently changed the name of one of their market segments from “Network Traffic Analysis” to “Network Detection and Response” to reflect the shift in demand for more security-focused network analysis solutions.
Here are four ways that network data in general and network traffic analysis in particular can benefit the SecOps team at the Security Operations Center (SOC) level:
1. Enabling behavioral-based threat detection
Signature-based threat detection, as found in most antivirus and firewall solutions, is reactive. Vendors create signatures for malware as they appear in the wild or license them from third-party sources like Google’s VirusTotal, and update their products to recognize and protect against the threats.
While this is a useful way to quickly screen out all known dangerous files from entering a network, the approach has limits. The most obvious is that signature-based detection can’t catch new threats for which no signature exists. But more importantly, a growing percentage of malware is obfuscated to avoid signature-based detection. Research by network security company WatchGuard Technologies found that a third of all malware in 2019 could evade signature-based antivirus, and that number spiked to two-thirds in Q4 2019. These threats require a different detection method.
Network traffic analysis (also known as network detection and response, or NDR) uses a combination of advanced analytics, machine learning (ML) and rule-based detection to identify suspicious activities throughout the network. NDR tools consume and analyze raw traffic, such as packet data, to build models that reflect normal network behavior, then raise alerts when they detect abnormal patterns.
Unlike signature-based solutions, which typically focus on keeping malware out of the network, most NDR solutions can go beyond north-south traffic to also monitor east-west traffic, as well as cloud-native traffic. These capabilities are becoming increasingly important as businesses go virtual and cloud-first. NDR solutions thus help SecOps detect and prevent attacks that can evade signature-based detection. To function, these NDR solutions require access to high-quality network data.
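At its core, the baseline-then-alert idea can be sketched with a crude statistical stand-in for the models real NDR products learn from raw traffic. The metric, baseline values and threshold below are illustrative assumptions:

```python
from statistics import mean, stdev

def is_anomalous(baseline, observed, threshold=3.0):
    """Flag an observation that deviates from the learned baseline by
    more than `threshold` standard deviations (a crude stand-in for the
    behavioral models NDR tools build from packet data)."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Bytes-per-minute a host normally sends east-west.
baseline = [980, 1010, 1005, 995, 1020, 990, 1000, 1012]
assert not is_anomalous(baseline, 1015)   # within normal variation
assert is_anomalous(baseline, 250_000)    # sudden internal data sweep
```

Production systems model many dimensions at once (ports, peers, timing, protocol mix) and adapt the baseline over time, but the flag-on-deviation principle is the same.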
2. Providing data for security analytics, compliance and forensics
The SecOps team will often need the network data and behavior insights for security analytics or compliance audits. This will usually require network metadata and packet data from physical, virtual and cloud-native elements of the network deployed across the data center, branch offices and multi-cloud environments.
The easier it is to access, index and make sense out of this data (preferably in a “single pane of glass” solution), the more value it will provide. Obtaining this insight is entirely feasible but will require a mix of physical and virtual network probes and packet brokers to gather and consolidate data from the various corners of the network to process and deliver it to the security tool stack.
NDR solutions can also offer the SecOps team the ability to capture and retain network data associated with indicators of compromise (IOCs) for fast forensics search and analysis in case of an incident. This ability to capture, save, sort and correlate metadata and packets allows SecOps to investigate breaches and incidents after the fact and determine what went wrong, and how the attack can be better recognized and prevented in the future.
3. Delivering better network visibility for better security automation
Qualified security professionals are rare, and their time is extremely valuable. Automating security tasks can help businesses resolve incidents more quickly and free up time for the SecOps team to focus on more important tasks. Unfortunately, visibility and automation only work as well as the quality and granularity of the data – and both too little and too much can be a problem.
Too little data, and the automated solutions are just as blind as the SecOps team. Too much data, in the form of a threat detection system that throws out too many alerts, can result in a “boy who cries wolf” scenario with the automated responses shutting down accounts or workloads and doing more harm than good.
Missing data, too many alerts or inherent blind spots can mean that the machine learning and analytical models that NDR relies on will not work correctly, producing false positives while missing actual threats. In the long run, this means more work for the SOC team.
The key to successful automation is to have high-quality network data to enable accurate security alerts, so responses can be automated.
4. Decreasing malware dwell time
NDR solutions typically have little to no blocking ability because they are generally not deployed inline (although that choice is up to the IT teams). But even so, they are effective in shortening the incident response window and reducing the dwell time of malware by quickly identifying suspect behavior or traffic. Results from NDR tools can be fed into downstream security tools that can verify and remediate the threats.
Malware dwell time has been steadily decreasing across the industry; the 2019 Verizon Data Breach Investigations Report (DBIR) found that 56% of breaches took months or longer to detect, but the 2020 DBIR found that 81% of data breaches were contained in days or less. This is an encouraging statistic, and hopefully SecOps teams will continue partnering with NetOps to reduce dwell time even further.
The benefits of network detection and response or network traffic analysis go far beyond the traditional realm of NetOps. By cooperating, NetOps and SecOps teams can create a solid visibility architecture and practice that strengthens their security posture, leaving organizations well prepared if an attack takes place.
Full network visibility allows security teams to see all the relevant information through a security delivery layer, use behavioral-based or automated threat detection methods, and be able to capture and store relevant data for deep forensics to investigate and respond to any incident.
Among the individuals charged with protecting and improving a company’s cybersecurity, the CISO is typically seen as the executive for the job. That said, the shift to widespread remote work has made a compelling case for the need to bring security within the remit of other departments.
The pandemic has torn down physical office barriers, opening businesses up to countless vulnerabilities as the number of attack vectors increased. The reality is that every employee is a potential vulnerability and, with the security habits of workers remaining questionable even amid a rising number of data breaches, it’s never been more important to foster a culture of security throughout an organization.
Improving security with culture
We continue to see different data breaches in the news, with hundreds of millions of users on Instagram, TikTok and YouTube having their accounts compromised in the latest breach. These instances, and countless others, are a testament to the critical importance of strong security behaviors – both at work and home – and the training and attentiveness they require.
The shared responsibility in security is closely tied to how employees at all levels perceive the importance of security. If this is ingrained within the culture, they will have the abilities and tools to protect themselves. This is, of course, easier said than done.
Creating and maintaining a security culture is a never-ending, constantly evolving mission, and influencing people’s behavior is often the most challenging part of the effort. People have become numb to the security threats they face: although they understand the potential risks, they often do nothing about them. For example, recent research revealed that 92 percent of UK workers know that reusing the same password is risky, but 64 percent of them do it anyway. So, how do we cut through that dissonance and get people engaged in security?
Encouraging cyber-secure practices from the top
As security continues to grow in importance, organizations absolutely need an executive at the top to vocally and adamantly advocate for security.
CISOs typically lead this charge. They are often tasked with leading a security team and a program responsible for protecting all information assets, as well as ensuring disaster recovery, business continuity and incident response plans are in place and regularly tested. In addition, CISOs and their teams are usually responsible for evaluating new technologies, staying updated on compliance regulations, overseeing identity and access management, communicating risks and security strategies to the C-suite and providing trainings.
Today, CISOs are also focused on protecting a highly distributed workforce and customers – in offices, at home or a mix of both – and meeting the new security challenges and threats that come along with this hybrid environment. That’s why it’s more important than ever for other C-suite executives to help promote and drive the organization’s security culture – especially through communications, training and enforcement of best practices.
While CISOs continue to spearhead the development of the organization’s security program and define the security mission and culture, other C-suite executives can vocally support these programs to ensure their integrity throughout the whole process, from vision and development to implementation and ongoing enforcement. The participation of the C-suite can also help CISOs focus on the most important security issues and adjust the program to ensure it is aligned with broader business plans and strategies, thereby helping to get broader support without compromising security.
One likely companion for this type of cross-department alignment is the Chief Operating Officer (COO). As this role typically reports directly to the CEO and is considered to be second in the chain of command, the COO will be able to provide the authority needed to advocate for security and how it can impact employees, customers, products and ultimately the business. This means a good COO today needs to encourage a business culture that supports security efforts thoroughly, while also ensuring security is prioritized at a tactical level.
However, the COO is not the only one that needs to serve as a security advocate. All C-level executives have a critical role to play in establishing a strong security culture. Because of their connections to different stakeholders, they will be able to share diverse insights.
For example, the COO can better incorporate input from the board, which is vital to ensuring the CISO understands the company’s risk tolerance which will directly impact innovation and revenue. The Chief Financial Officer (CFO) could share insights into the spending priorities and various obligations needed to protect financial systems and the Chief Human Resources Manager (CHRM) could get valuable data from employees. The CHRM is instrumental when driving the development of the security culture; their level of engagement often determines the overall success of developing a successful security-conscious culture.
Security-conscious C-suite executives will be able to step in to support the CISO’s mission that security needs to be a top priority.
Having model behavior fed from the very top will help to underline an organization’s collective commitment to cybersecurity. In doing so, employees are empowered by a sense of shared responsibility around their role in keeping a company’s corporate data secure. To this end, it’s crucial that the C-suite of modern companies are trailblazers of security, particularly in the current landscape.
The techniques employed by cybercriminals are becoming more and more sophisticated, and the risk of data breaches and stolen information being offered for sale on the dark web has never been higher. As the pandemic continues to influence developments in information security, senior leadership, middle management and junior staff members must all work together towards a collective aim of securing their workplace.
Fostering a culture of security awareness is by no means an easy feat, but the long-term gains outweigh any teething issues and will serve to make businesses watertight in the midst of a growing threat landscape.
Compliance is probably one of the dullest topics in cybersecurity. Let’s be honest, there’s nothing to get excited about because most people view it as a tick-box exercise. It doesn’t matter which compliance regulation you talk about – they all get a collective groan from companies whenever you start talking about it.
The thing is, compliance requirements are often poorly written, vague and confusing. In my opinion, the confusion around compliance stems from the writing itself, so it’s no surprise companies are struggling, especially when they have to comply with multiple requirements simultaneously.
Poor writing is smothering compliance regulations
Take ISO 27001 as an example. Its goal is to improve a business’ information security management, and its process has six parts, which include commands like “conduct a risk assessment”, “define a security policy” and “manage identified risks”. The requirements for each of these commands are extremely vague and needlessly subjective.
The Sarbanes-Oxley Act (SOX), which applies to publicly traded companies in the United States, is no better. Section 404 vaguely says that these organizations have to demonstrate “due diligence” in the disclosure of financial information, but it does not explain what “due diligence” means.
The Gramm-Leach-Bliley Act (GLBA) requires US financial institutions to explain information-sharing practices to their customers. It says financial organizations have to “develop a written information security plan”, but then doesn’t offer any advice on how to achieve that.
Even Lexcel (an accreditation indicating quality in relation to legal practice management standards) in the United Kingdom, which is written by lawyers for lawyers, is not clear: “Practices must have an information management policy with procedures for the protection and security of the information assets.”
For a profession that prides itself on being able to maintain absolute clarity, I’m surprised Lexcel allows this type of subjectivity in its compliance requirements.
It’s not easy to write for such a wide audience
Look, I understand. It’s a pretty tricky job to write compliance requirements. It needs to be applicable to all organizations within a particular field, each of which will have their differences in the way they conduct business and how they’ve set up their technological infrastructure.
Furthermore, writers are working against the clock with compliance requirements. IT regulations are changing at such a quick pace that the requirements they write today might be out of date tomorrow.
However, I think those who write requirements should take the Payment Card Industry Data Security Standard (PCI DSS) as an example. The PCI DSS applies to all organizations that store cardholder data and the requirements are clear, regularly updated, and you can find everything you need in one place.
The way PCI DSS compliance is structured (in terms of requirement, testing procedures and guidance) is a lot clearer than anything else I’ve seen. It contains very little room for subjectivity, and you know exactly where you stand with it.
The GDPR is also pretty well written and detailed. The many articles referring to data protection are specific, understandable and implementable.
For example, when it comes to data access, this sentence is perfectly clear: “Unauthorized access also includes accidental or unlawful destruction, loss, alteration, or unauthorized disclosure of personal data transmitted, stored or otherwise processed” (Articles 4, 5, 23 and 32).
It’s also very clear when it comes to auditing processes: “You need to maintain a record of data processing activities, including information on ‘recipients to whom the personal data have been or will be disclosed’, i.e. who has access to data” (Articles 5, 28, 30, 39, 47).
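To make the record-of-processing requirement quoted above concrete, a minimal Article 30-style record might look like the sketch below. The field names paraphrase the regulation and the sample values are invented; this is an illustration of the kind of information to capture, not a GDPR-mandated structure.

```python
from dataclasses import dataclass

# Minimal sketch of a record of processing activities (GDPR Art. 30).
# Field names paraphrase the regulation; the sample values are invented.
@dataclass
class ProcessingRecord:
    purpose: str             # why the data is processed
    data_categories: list    # which categories of personal data
    recipients: list         # "to whom the personal data have been or will be disclosed"
    retention: str           # how long the data is kept

record = ProcessingRecord(
    purpose="payroll",
    data_categories=["name", "bank account"],
    recipients=["Payroll Processor Ltd (hypothetical)"],
    retention="7 years",
)
```

Keeping even a simple structured record like this makes the auditing question (“who has access to data?”) answerable on demand.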
So, while you’re faced with many compliance requirements, you need to have a good strategy in place. However, it can get complex when you’re trying to comply with multiple mandates. If I can give you one tip, it is to find the commonalities between all of them, before coming up with a solution.
You need to do the basics right
In my opinion, the confusing nature of compliance only spawns the relentless bombardment of marketing material from vendors on “how you can be compliant with X” or the “top five things you need to know about Y”.
You have to understand that at the core of any compliance mandate is the desire to keep protected data secure, only allowing access to those who need it for business reasons. This is why all you need to do with compliance is to start with the basics: data storage, file auditing and access management. Get those right, and you’re on your way to demonstrating your willingness to comply.
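As a sketch of the file-auditing basic mentioned above, a single audit-log entry might record who touched which file, when, and a content hash so later tampering is detectable. The field names and helper below are my own assumptions, not any standard’s required format.

```python
import hashlib
import os
import time

def audit_record(path, user):
    """Build one audit-log entry for a file access: who, what, when,
    plus a SHA-256 of the contents so later tampering is detectable."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    stat = os.stat(path)
    return {
        "user": user,
        "path": path,
        "size": stat.st_size,
        "sha256": digest,
        "accessed_at": time.time(),
    }
```

Entries like this, appended to a write-once log, give you the file-auditing trail most mandates implicitly expect.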
Insider threats can take many forms, from the absent-minded employee failing to follow basic security protocols, to the malicious insider, intentionally seeking to harm your organization.
Some threats may stem from a simple mistake, others from a personal vendetta. Some insiders will work alone, others at the behest of a competitor or nation-state.
Whatever the method and the motives, the results can be devastating. The average cost of a single negligent insider incident exceeds $300k. That figure increases to over $755k for a criminal or malicious attack, and up to $871k for one involving credential theft.
Unlike many other common attacks, insider attacks are rarely a smash-and-grab. The longer a threat goes undetected, the more damage it can do to your organization. The better you understand your people – their motivations, and their relationship with your data and networks – the earlier you can detect and contain potential threats.
Insider threats can be loosely split into two categories – negligent and malicious. Within those categories are a range of potential drivers.
As the mechanics of an attack can differ significantly depending on its motives, gaining a thorough understanding of these drivers can be the difference between a potential threat and a successful breach.
Financial gain is perhaps the most common driver for the malicious insider. Employees across all levels are aware that corporate data and sensitive information has value.
To an employee with access to your data, allowing it to fall into the wrong hands can seem like minimal risk for significant reward.
This is another threat that is likely higher risk in the current environment. The coronavirus pandemic has placed millions of people under financial pressure, with many furloughed or facing job insecurity. What once seemed an unimaginable decision, may now feel like a quick solution.
Negligence is the most common cause of insider threats, costing organizations an average of $4.58 million per year.
Such a threat usually results from poor security hygiene – a failure to properly log in/out of corporate systems, writing down or reusing passwords, using unauthorized devices or applications, and a failure to protect company data.
Negligent insiders are often repeat offenders who may skirt round security for greater speed, increased productivity or just convenience.
A distracted employee could fall into the “negligent” category. However, it is worth highlighting separately as this type of threat can be harder to spot.
Where negligent employees may raise red flags by regularly ignoring security best practices, the distracted insider may be a model employee until the moment they make a mistake.
The risk of distraction is potentially higher right now, with most employees working remotely, many for the first time, often interchanging between work and personal applications. Outside of the formal office environment and distracted by home life, they may have different work patterns, be more relaxed and inclined to click on malicious links or bypass formal security conventions.
Some malicious insiders have no interest in personal gain. Their sole driver is harming your organization.
The headlines are full of stories about the devastating impact of data breaches. For anyone wishing to damage an organization’s reputation or revenues, there is no better way in the digital world than by leaking sensitive customer data.
Insiders with this motivation will usually have a grievance against your business. They may have been looked over for a pay rise or promotion, or recently subject to disciplinary action.
Espionage and sabotage
Malicious insiders do not always work alone. In some cases, they may be passing information to a third-party such as a competitor or a nation-state.
Such cases tend to fall under espionage or sabotage. This could mean a competitor recruiting a plant in your organization to syphon out intellectual property, R&D, or customer information to gain an edge, or a nation-state looking for government secrets or classified information to destabilize another.
Cases like these have been on the rise in recent years. Hackers and plants from Russia, China and North Korea are regularly implicated in cases of corporate and state-sponsored insider attacks against Western organizations.
Defending from within
Just as they affect method, motives also dictate the appropriate response. An effective deterrent against negligence is unlikely to deter a committed and sophisticated insider intent on causing harm to your organization.
That said, the foundation for any defense is comprehensive controls. You must have total visibility of your networks – who is using them and what data they are accessing. These controls should be leveraged to limit sensitive information to only the most privileged users and to strictly limit the transfer of data from company systems.
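The deny-by-default, most-privileged-users-only control described above can be sketched as a simple access map. The data classes and role names below are invented for illustration; a real deployment would sit behind an identity provider and log every decision.

```python
# Hypothetical mapping of sensitive data classes to the roles allowed
# to read them. Anything not explicitly granted is denied.
SENSITIVE_ACL = {
    "customer_pii": {"dpo", "fraud_analyst"},
    "source_code": {"engineer"},
    "financials": {"cfo", "auditor"},
}

def can_access(role, data_class):
    """Allow access only if the role is explicitly granted; deny by default,
    including for data classes the map has never heard of."""
    return role in SENSITIVE_ACL.get(data_class, set())
```

The key design choice is the default-deny fallback: an unknown data class or role fails closed rather than open.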
With this broad base in place, you can now add further layers to counter specific threats. To protect against disgruntled employees, for example, additional protections could include filters on company communications to flag high-risk vocabulary, and specific controls applied to high-risk individuals, such as those who have been disciplined or are soon to be leaving the company.
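A minimal sketch of the communication filter mentioned above might look like the following. The high-risk term list and the literal keyword matching are illustrative assumptions; a production system would use a tuned lexicon (with HR and legal input) and far more sophisticated detection.

```python
import re

# Hypothetical high-risk vocabulary; a real deployment would tune this
# list carefully and use NLP rather than literal keyword matching.
HIGH_RISK_TERMS = ["resign", "exfiltrate", "sell data", "payback"]

def flag_message(text):
    """Return the high-risk terms found in a message (case-insensitive,
    whole-phrase matches only), for a human reviewer to triage."""
    return [
        term for term in HIGH_RISK_TERMS
        if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE)
    ]
```

Flagged messages should feed a review queue, not automatic action: context matters, and false positives handled badly will damage the very trust the program depends on.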
Finally, any successful defense against insider threats should have your people at its heart.
You must create a strong security culture. This means all users must be aware of how their behavior can unintentionally put your organization at risk. All must know how to spot early signs of potential threats, whatever the cause. And all must be aware of the severe consequences of intentionally putting your organization in harm’s way.
Picture this: An email comes through, offering new COVID-19 workplace safety protocols, and an employee, worn down by the events of the day or feeling anxious about their safety, clicks through. In a matter of seconds, the attacker enters the network. Factor in a sea of newly remote workers and overloaded security teams, and it’s easy to see how COVID-19 has been a boon for cybercriminals.
Cracks in cyber defenses
The global pandemic has exposed new cracks in organizations’ cyber defenses, with a recent Tenable report finding just under half of businesses have experienced at least one “business impacting cyber-attack” related to COVID-19 since April 2020. For the most part, COVID-19 has exacerbated pre-existing cyberthreats, from counter incident response and island hopping to lateral movement and destructive attacks. Making matters worse, today’s security teams are struggling to keep up.
A survey of incident response (IR) professionals found that 53% encountered or observed a surge in cyberattacks exploiting COVID-19, specifically pointing to remote access inefficiencies (52%), VPN vulnerabilities (45%) and staff shortages (36%) as the most daunting endpoint security challenges.
VPNs, which many organizations rely on for protection, have become increasingly vulnerable, and it is cause for concern that software patches are generally applied on a weekly cycle at best, with very few organizations updating daily. Even that cadence may not be enough to protect your information, primarily due to the explosion of both traditional and fileless malware.
As for vulnerabilities, IR professionals point to the use of IoT technologies, personal devices like iPhones and iPads, and web conferencing applications, all of which are becoming increasingly popular with employees working from home. Last holiday season, the number one consumer purchase was smart devices. Now they’re in homes that have become office spaces.
Cybercriminals can use those family environments as a launchpad to compromise organizations. In other words, attackers are still island hopping, but instead of starting from one organization’s network and moving along the supply chain, the attack may now originate in home infrastructures.
Emerging attacks on the horizon
Looking ahead, we’ll continue to see burgeoning geopolitical tensions, particularly as we near the 2020 presidential election. These tensions will lead to a rise in destructive attacks.
Moreover, organizations should prepare for other emerging attack types. For instance, 42% of IR professionals agree that cloud jacking will “very likely” become more common in the next 12 months, while 34% said the same of access mining. Mobile rootkits, virtual home invasions of well-known public figures and Bluetooth Low Energy attacks are among the other attack types to prepare for in the next year.
These new methods, in tandem with a surge in counter IR, destructive attacks, lateral movement and island hopping, make for a perilous threat landscape. But with the right tools, strategies, collaboration and staff, security teams can handle the threat.
Best practices for a better defense of data
As the initial shock of COVID-19 subsides, we should expect organizations to firm up their defenses against new vulnerabilities, whether it’s addressing staff shortages, integrating endpoint technologies, aligning IT and security teams or adapting networks and employees to remote work. The following five steps are critical in order to fight back against the next generation of cyber attacks:
- Gain better visibility into your system’s endpoints – This is increasingly important in today’s landscape, with more attackers seeking to linger for long periods on a network and more vulnerable endpoints online via remote access.
- Establish digital distancing practices – People working from home should have two routers, segmenting traffic from work and home devices.
- Enable real-time updates, policies and configurations across the network – This may include updates to VPNs, audits or fixes to configurations across remote endpoints and other security updates.
- Remember to communicate – about new risk factors (spear phishing, smart devices, file-sharing applications, etc.), protocols and security resources.
- Enhance collaboration between IT and security teams – This is especially true under the added stress of the pandemic. Alignment should also help elevate IT personnel to become experts on their own systems.
Hackers continue to exploit vulnerable situations, and the global disruption brought on by COVID-19 is no different. Organizations must now refocus their defenses to better protect against evolving threats as workforces continue to shift to the next “normal” and the threat landscape evolves.
Another month has passed working from home and September Patch Tuesday is upon us. For most of us here in the US, September usually signals back to school for our children and with that comes a huge increase in traffic on our highways. But I suspect with the big push for remote learning from home, those of us in IT may be more worried about the increase in network traffic. So, should we expect a large number of updates this Patch Tuesday that will bog down our networks?
The good news is that I expect a more limited release of updates from Microsoft and third-party vendors this month. In August, we saw a HUGE set of updates for Office and also an unexpected .NET release after just having one in July.
Also looking back to last month, there were some reported issues on the Windows 10 version 1903, 1909, and 2004 updates. Applying the updates for KB 4565351 or KB 4566782 resulted in a failure for many users on automatic updates with return codes/explanations that were not very helpful. Let’s hope the updates are more stable this month without the need to re-apply, or worse, redistribute these large updates across our networks using even more bandwidth.
Last month I talked about software end-of-life (EOL) and making sure you had a plan in place to properly protect your systems in advance. Just as an early reminder we have the EOL of Windows Embedded Standard 7 coming up on October Patch Tuesday. Microsoft will offer continued Extended Security Updates (ESUs) for critical and important security updates just like they did for Windows 7 and Server 2008.
These updates will be available for three years through October 2023. Microsoft also provided an update on the ‘sunset’ of the legacy Edge browser in March 2021 along with the announcement that Microsoft 365 apps and services will no longer support IE 11 starting in August 2021. They made it clear IE 11 is not going away anytime soon, but the new Edge is required for a modern browser experience. These changes are all still a few months out but plan accordingly.
September 2020 Patch Tuesday forecast
- We’ll see the standard operating system updates, but as I mentioned earlier, with the large Office and individual application updates released last month, expect a smaller and more limited set this time.
- Servicing stack updates (SSUs) are hit or miss each month. The last required update was released in May. Expect to see a few in the mix once again.
- A security update for Acrobat and Reader came out last Patch Tuesday. There are no pre-announcements on their web site so we may see a small update, if any.
- Apple released security updates last month for iTunes and iCloud, so we should get a break this month if they maintain their quarterly schedule.
- Google Chrome 85 was released earlier this week, but we may see a security release if they have any last-minute fixes for us.
- We’re due for a Mozilla security update for Firefox and Thunderbird. The last security release was back on August 25.
Remote security management of both company-provided and user-attached systems provides many challenges. With a projected light set of updates this month, hopefully tying up valuable bandwidth isn’t one of those challenges.
The 2020 United States presidential election is already off to a rocky start. We’ve seen technology fail in the primary elections, in-person campaigning halted, and a plethora of mixed messages on how voting will actually take place. Many Americans are still uncertain where or how they will vote in November – or worse, they’re unsure if their vote will be tabulated correctly.
For most of us, voting by anything other than a paper ballot or a voting machine is a foreign concept. Due to the pandemic and shelter in place restrictions, various alternatives have been considered this year — in particular, voting via our mobile devices.
On paper, it might seem like COVID-19 has created the ideal opportunity to introduce voting options that utilize the millions of mobile phones and tablets in U.S. voters’ hands. The reality is, our country is not ready to utilize this technology in a safe and protected way.
Here are the four things holding back mobile voting:
Testing and scalability
If we have learned anything from the Iowa Caucus app failure, it is that testing for scalability is key. Prior to Election Day, we must confirm that every voter will be able to vote from their mobile device from any location, all at the same time, without the system crashing.
This is no small feat: newly deployed code almost always has faults, and if a voting app has not undergone rigorous testing at scale by now (less than 75 days from Election Day), it is highly unlikely that it could be sufficiently tested and distributed in time.
Verification and secret ballots
Tying an identity to a user and phone negates the concept of an anonymized ballot, something we’re entitled to as eligible voters. If the vote is cast via a mobile device — especially if there is some way of reconciling the paper ballot back to the electronic vote — then there has to be an identity key that is used to correlate them.
Verifying the identity of the voter and their device and doing it in a way that also allows for secret ballots is a critical challenge to overcome if mobile voting is ever to become a reality.
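The tension described above can be made concrete with a deliberately naive sketch: a registrar verifies identity, then issues a random token, so the ballot itself carries no identity. The flaw is that the registrar’s issuance table still links token to voter, so this simple scheme does not actually achieve a secret ballot; real proposals rely on techniques such as blind signatures or mix networks. Everything below is illustrative, not any real voting system’s design.

```python
import secrets

# voter_id -> token: this table is precisely the linkage that breaks
# ballot secrecy, even though the ballots below carry no identity.
issuance_table = {}

def issue_token(voter_id):
    """After out-of-band identity verification, hand back an unguessable token."""
    token = secrets.token_hex(16)
    issuance_table[voter_id] = token
    return token

def cast_ballot(token, choice, ballot_box):
    """Record the vote against the token only, never the voter_id."""
    ballot_box.append({"token": token, "choice": choice})
```

The ballot box alone reveals nothing about who voted how, but whoever holds the issuance table can de-anonymize every ballot, which is exactly why identity verification and secrecy are so hard to reconcile.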
Even if the kinks in mobile voting are worked out, how can we ensure overall trust in the system? Not only do we need to trust that our vote was cast, but that it was cast in a way that is private, secure, and for the person it was intended. If there is no reconciliation with the paper ballot, how are any risk-limiting audits conducted? Without an auditable system, it is impossible to win the trust of the electorate, which is an absolute necessity ahead of a process as integral to our country as voting.
QR code risks
Chances are, voters would be directed to a voting website via a QR code. While the reliance on distributed ledger technology — even with a cryptographic signature that is highly resistant to alteration — provides a strong method of recording and tabulating votes, it is still not cyber-invincible.
QR codes are not “readable” by humans. Therefore, the ability to alter a QR code to point to an alternative resource without being detected is simple and highly effective. The target of the QR code could result in compromise of credentials, phishing, and malicious code downloads.
Most significantly in this scenario, the QR code could redirect the voter to a site where their vote is captured, altered, returned to the device or forwarded on to the actual site, and when the voter signs the affidavit and submits their vote, it may or may not be for who they actually intended to vote.
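Because a QR payload cannot be eyeballed, one mitigation after decoding is strict allowlist validation of the target URL before the browser follows it. The official domain below is a hypothetical placeholder; this is a sketch of the check, not any real election system’s implementation.

```python
from urllib.parse import urlparse

# Hypothetical placeholder for the one legitimate voting domain.
OFFICIAL_HOSTS = {"vote.example.gov"}

def is_official_url(url):
    """Accept only HTTPS links whose host exactly matches the allowlist.

    Exact hostname matching (rather than substring checks) defeats
    lookalike tricks such as 'vote.example.gov.attacker.com'.
    """
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in OFFICIAL_HOSTS
```

Even this simple check would stop the redirect scenario above, though it only helps if the scanning app enforces it before opening the link.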
Ultimately, the most important thing we can do this election is vote — vote by mail, vote in person, vote early, and vote in a way that you can be sure your vote will be counted for the candidate for whom you intended to vote. However, the idea that we’ll be able to vote safely via our mobile devices — at least this time around — is nothing but a pipe dream. Until we work out the security and privacy concerns associated with mobile voting, we’re going to have to stick to traditional methods.
In August 2019, cybersecurity researchers revealed that a hacker group known as Sea Turtle targeted 40 telecoms, internet service providers, domain registrars and government organizations in the Middle East and North Africa. The attackers hijacked the domain names of ministries of foreign affairs, intelligence/military agencies and energy-related groups in those regions. As a result, Sea Turtle was able to intercept all internet data – including email and web traffic – sent to the victims.
The post Safe domain: How to protect your enterprise from DNS hijacking appeared first on Help Net Security.
The information security industry frequently utilizes the phrase “people, processes and technology” (PPT) to describe a holistic model of securing the business.
But though this phrase is repeated ad nauseam, we seem to have forgotten one of those three primary pillars: people.
In an effort to secure things technically, we prioritize the protection of our processes and technology, while ignoring a critical element to both the success and security of organizations. While it is common sense to prioritize humans – our first line of defense against cyberattacks – too often we only focus on processes and technology, leaving a significant part of our environment dangerously exposed.
Forgetting the people of the PPT approach is like operating a car without airbags. Perhaps you cannot physically see the hazardous gap, but the drive will be incredibly unsafe.
How do we mitigate this gap? By recognizing that people matter. In the information security domain, we place an extensive premium on technical matters, which leads us to neglect humanism, soft skills and the human capital of the business.
Avoid disempowering your staff
Security professionals often describe humans within the cybersecurity space as the weakest link of the system. Security staff often use this phrase to describe everyone but themselves, which does little to enable trust between internal teams or to encourage collaboration among cross-functional groups. In addition, it cultivates an “us versus them” mentality, which is damaging to professional relationships and the success of our information security programs.
Even if people are the element most susceptible to phishing attempts, or the link most likely to be negligent in security practices, it becomes incredibly difficult to foster a culture of security awareness if we demoralize or denigrate the individuals we need to help drive our security priorities.
How does a security team avoid disempowering fellow employees? The solution is quite simple: be aware of the words and phrases you use to describe the people of the PPT model. Develop trust by utilizing positive language during communication and approaching all staff with respect when informing them that security is the responsibility of all employees. You will more effectively keep the attention of staff when you demonstrate that you respect them and indicate that you view them as a primary element of keeping the organization secure.
Steer clear of “My way or the highway!”
The stress of constant security incidents and continuous fear of potential data breaches lead many security teams to operate with a rigid, iron-fist management approach. Instead of allowing security to better enable the business, ideas and programs are forced through and collaboration is thrown by the wayside in the name of making our environments more secure.
While this certainly does not make us popular within the workplace, it also contributes to a lack of trust between security and other business functions. Trust is critical to the success of our security paradigm, which means we must take every opportunity possible to ensure that security enables the business. Without trust, the people of our businesses will not follow our security policies, report suspicious activity, or see cybersecurity in the organization as something they are directly responsible for.
Is it possible for security teams to operate in a flexible and collaborative manner that guarantees the advancement of the security program, while simultaneously not hindering the day-to-day work of other staff?
Most definitely. And the solution, like the above, is free, and requires no processes or technology. Be open to opposing opinions regarding the implementation of your security project or program. Approach others cooperatively on how the integration of a new security tool or application should be managed. Asking others, candidly, if there is a “better” way to address a security problem is a wonderfully collaborative way to engage within a culture of teamwork.
Those outside the security team may have ingenious approaches to fixing security problems that we may never have thought of – solutions that both mitigate the security issue and don’t hinder the day-to-day work of employees. Acknowledging the skills and expertise of other non-security teams allows us to discover more innovative ways of approaching a security problem.
Continue to implement technical controls but consider implementing another element into your governance model: people matter. This value, though it sounds simple, is an effective way to not only manage security risk at an acceptable level, but also to ensure that we cultivate our security models holistically.
Where there’s money, there’s also an opportunity for fraudulent actors to leverage security flaws and weak entry-points to access sensitive, personal consumer information.
This has caused a sizeable percentage of consumers to avoid adopting mobile banking completely and has become an issue for financial institutions who must figure out how to provide a full range of financial services through the mobile channel in a safe and secure way. However, with indisputable demand for a mobile-first experience, the pressure to adapt has become unavoidable.
In order to offer that seamless, omnichannel experience consumers crave, financial institutions have to understand the malicious actors and fraudulent tactics they are up against. Here are a few that have to be on the mobile banking channel’s radar.
1. Increased device usage sparks surge in mobile malware
Banking malware has become a very common mobile threat, even more so now as fraudsters leverage fear and uncertainty surrounding the global pandemic. According to a recent report by Malwarebytes, mobile banking malware has surged over recent months, focused on stealing personal information and using weakened remote connections and mobile devices in a work-from-home environment to gain access to more valuable corporate networks.
The financial burden of a data breach resulting from mobile malware could potentially set organizations back millions of dollars, as well as do some serious damage to customer trust and loyalty.
2. Sacrificing software quality and security with premature product rollouts
Securing mobile is a laborious task that requires mobile app developers to account for several entities, including device manufacturers, mobile operating system developers, app developers, mobile carriers, and service providers. No platform or device can be secured in the same way, meaning developers constantly have to overcome a unique set of challenges in order to reduce the risk of fraudulent activity.
The reality of such a complex ecosystem is that mobile app developers are not always qualified to understand all the risks at play, which leads to unsecured mobile data, connections, and transactions. Additionally, the speed at which the market moves thanks to emerging technologies and innovations creates an added layer of pressure for developers. Lacking the resources and time to properly protect consumers can lead to high-profile attacks where sensitive data is exploited.
3. Vulnerabilities in digital security protocols
At any given time, every entity in the ecosystem described above must have high confidence in the entity on the other side of the transaction to ensure its legitimacy. A lack of digital security protocols like secure sockets layer (SSL) and transport layer security (TLS) in mobile banking apps makes it difficult to establish encrypted links between every entity that ultimately help prevent phishing and man-in-the-middle attacks.
If we continue growing our ecosystem at the current rate, adding to its complexity and connecting more and more third-party services and networks, we can no longer avoid fixing the broken system we have for SSL certificate validation.
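Stricter certificate validation does not require exotic tooling; modern TLS libraries perform full chain and hostname checks by default when configured correctly. As a minimal sketch (the function name is mine, and a real client would add retries and error handling), here is how Python's standard library establishes a connection that refuses any server failing those checks:

```python
import socket
import ssl

def fetch_server_certificate(host, port=443):
    """Connect over TLS with full chain and hostname validation,
    returning the peer certificate on success."""
    context = ssl.create_default_context()
    # These are the library defaults; never relax them in production code.
    assert context.verify_mode == ssl.CERT_REQUIRED
    assert context.check_hostname is True
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

A server presenting an invalid or mismatched certificate causes the handshake to raise `ssl.SSLCertVerificationError` instead of silently proceeding, which is precisely the behavior that blocks man-in-the-middle attacks.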
4. Unreliable mobile device identification
Another issue at play is device identification. The only way other entities in the ecosystem can recognize a unique device is through device fingerprinting. This is a process through which certain unique attributes of a device – operating system, type and version of web browser, the device’s IP address, etc. – are combined for identification. This information can then be pulled from a database for future fraud prevention purposes and a range of other use-cases.
Data privacy concerns and limited data sharing on devices, however, have weakened the process and reliability of identification. If we do not have enough discrete data points to establish a reliable digital fingerprint, the whole system becomes ineffective.
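The combining step described above can be as simple as hashing a canonical encoding of the collected attributes. Production fingerprinting systems use many more signals and fuzzier matching to survive attribute drift, so treat the following as an illustrative sketch with made-up attribute names:

```python
import hashlib
import json

def device_fingerprint(attributes):
    """Combine device attributes into a stable fingerprint hash.
    Sorting the keys makes the hash independent of attribute order."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attribute set gathered from a mobile session.
profile = {
    "os": "Android 11",
    "browser": "Chrome 85",
    "ip": "203.0.113.7",
    "screen": "1080x2340",
}
```

Note the fragility this exposes: if privacy controls withhold even one attribute, the hash changes entirely and the device no longer matches its stored record, which is exactly the reliability problem described above.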
5. Time to update authentication techniques
Fraudsters are always on the lookout for ways to intercept confidential login information that grants them access to protected accounts. Two-factor authentication (2FA) has become banks’ preferred security method for reliably authenticating users trying to access the mobile channel and staying ahead of cybercriminals.
More often than not, 2FA relies on one-time-passwords (OTPs) delivered by SMS to the account holder upon attempted login. Unfortunately, with phishing – especially via SMS – on the rise, hackers can gain access to a mobile device and OTPs delivered via SMS, and gain access to accounts and authenticate fraudulent transactions.
There are also a number of other tactics – e.g., SIM-swapping – attackers use to gain access to sensitive information and accounts.
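App-generated one-time passwords sidestep the SMS channel entirely. The underlying algorithm, TOTP (RFC 6238), is a short HMAC computation over the current time step; the sketch below shows the core of it, minus the secret provisioning and rate limiting a real deployment needs:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the code is derived from a shared secret and the clock, nothing travels over SMS for an attacker to intercept; the remaining risks shift to securing the enrollment step and the device itself.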
6. Lack of industry regulation and standards
Without the establishment of rigorous standards and guidance on online banking security and protecting the end-user, low consumer trust will inhibit mass market acceptance. The Federal Financial Institutions Examination Council (FFIEC) has yet to issue ample guidance on the topic of authentication and identification on mobile devices. Mobile security standards need to be a top priority for regulators, especially as new technologies and mobile malware continue to disrupt the market.
The underlying theme for banks to keep in mind is that trust is a currency they cannot afford to lose in such a competitive financial services market. In the race to provide seamless, omnichannel banking experiences, integrating better security protocols without compromising usability can feel like a constant balancing act. Researching the latest tools and technology as well as building trusted partner relationships with third-party service providers is the only way banks can differentiate themselves in a dynamic security landscape.
As COVID-19 forced organizations to re-imagine how the workplace operates just to maintain basic operations, HR departments and their processes became key players in the game of keeping our economy afloat while keeping people alive.
Without a doubt, people form the core of any organization. The HR department must strike an increasingly delicate balance while fulfilling the myriad of needs of workers in this “new normal” and supporting organizational efficiency. As the tentative first steps of re-opening are being taken, many organizations remain remote, while others are transitioning back into the office environment.
Navigating the untested waters of managing HR through this shift to remote and back again is complex enough without taking cybercrime and data security into account, yet it is crucial that HR do exactly that. The data stored by HR is the easy payday cybercriminals are looking for and a nightmare keeping CISOs awake at night.
Why securing HR data is essential
If compromised, the data stored by HR can do a devastating amount of damage to both the company and the personal lives of its employees. HR data is one of the highest risk types of information stored by an organization given that it contains everything from basic contractor details and employee demographics to social security numbers and medical information.
Many state and federal laws and regulations govern the storage, transmission and use of this high-value data. The sudden shift to a more distributed workforce due to COVID-19 increased the risk: with much of the HR workforce remote, more users need broader access across cloud, VPN, and personal networks.
Steps to security
Any decent security practitioner will tell you that no security setup is foolproof, but there are steps that can be taken to significantly reduce risk in an ever-evolving environment. A multi-layer approach to security offers better protection than any single solution. Multiple layers of protection might seem redundant, but if one layer fails, the other layers fill in the gaps.
Securing HR-related data needs to be approached from both a technical and end user perspective. This includes controls designed to protect the end user or force them into making appropriate choices, and at the same time providing education and awareness so they understand how to be good stewards of their data.
Secure the identity
The first step to securing HR data is making sure that the ways in which users access data are both secure and easy to use. Each system housing HR data should be protected by a federated login of some variety. Federated logins use a primary source of identity for managing usernames and passwords such as Active Directory.
When a user logs in through a federated login, the software uses a protocol like LDAP, SAML, or OAuth to query the primary source of identity, validating the username and password and confirming that the user has appropriate access rights. Users then only have to remember one set of credentials, and the organization can guarantee that the password complies with its mandated complexity policies.
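Enforcing a complexity policy at the primary source of identity is usually a configuration setting, but the check itself is simple. The sketch below encodes one hypothetical policy (minimum length plus four character classes); real policies vary by organization, and the thresholds here are illustrative:

```python
import re

def meets_complexity_policy(password, min_length=12):
    """Check a password against a hypothetical organizational policy:
    minimum length plus lowercase, uppercase, digit, and symbol."""
    checks = [
        len(password) >= min_length,
        re.search(r"[a-z]", password),
        re.search(r"[A-Z]", password),
        re.search(r"\d", password),
        re.search(r"[^\w\s]", password),  # punctuation / symbols
    ]
    return all(checks)
```

Because every federated application defers to the same identity source, the policy only needs to be enforced in one place to cover all systems housing HR data.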
The next step to credential security is to add a second factor of authentication on every system storing HR data. This is referred to as Multi-factor Authentication (MFA) and is a vital preventative measure when used well. The primary rule of MFA says that the second factor should be something “the user is or has” to be most effective.
This second factor can be anything from a PIN generated on a mobile device to a biometric check confirming that the person entering the password is, in fact, the account’s owner. Both approaches are easy for end users and add very little friction to authentication, while significantly reducing the risk of credential theft: an attacker must both compromise a user’s credentials and steal their mobile device or a copy of their fingerprint.
In today’s world, HR users working from somewhere other than the office is not unusual. With this freedom comes the need to secure the means by which they access data, regardless of the network they are using. The best way to accomplish this is to set up a VPN and ensure that all HR systems are only accessible either from inside of the corporate network or from IPs that are connected to the VPN.
A VPN creates an encrypted tunnel between the end user’s device and the internal network. The use of a VPN protects the user against snooping even if they are using an unsecured network like a public Wi-Fi at a coffee shop. Additionally, VPNs require authentication and, if that includes MFA, there are three layers of security to ensure that the person connecting in is a trusted user.
Next, you have to ensure that access is being used appropriately or that no anomalous use is taking place. This is done through a combination of good logging and good analytics software. Solutions that leverage AI or ML to review how access is being utilized and identify usage trends further increase security. The logging solution verifies appropriate usage while the analysis portion helps to identify any questionable activity taking place. This functions as an early warning system in case of compromised accounts and insider threats.
Comprehensive analytics solutions will notice trends in behavior and flag an account if the user deviates from their normal routine. If odd activity occurs (e.g., going through every HR record), the system alerts an administrator to delve deeper into why this user is viewing so many files. If access arrives through the VPN from IP ranges outside the expected geographic areas, accounts can be automatically disabled while alerts go out and a deeper investigation takes place. These measures shrink the scope of an incident and reduce the damage should an attack occur.
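At its simplest, the baseline-and-deviation logic an analytics engine applies can be sketched as a z-score check over a user's historical access counts. A real product correlates far richer features (time of day, geography, record types), so this is only a conceptual illustration:

```python
from statistics import mean, stdev

def flag_anomalous_access(history, today_count, threshold=3.0):
    """Return True if today's record-access count deviates more than
    `threshold` standard deviations from the user's historical baseline."""
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return today_count != baseline
    return abs(today_count - baseline) / spread > threshold
```

A user who normally touches about ten records a day and suddenly reads hundreds trips the flag immediately, while ordinary day-to-day variation passes through without generating alert fatigue.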
Secure the user
Security awareness training for end users is one of the most essential components of infrastructure security. The end user is a highly valuable target because they already have access to internal resources. The human element is often considered a high-risk factor because humans are easier to “hack” than passwords or automatic security controls.
Social engineering attacks succeed when people aren’t educated to spot red flags indicating an attack is being attempted. Social engineering attacks are the easiest and least costly option for an attacker because any charismatic criminal with good social skills and a mediocre acting ability can be successful. The fact that this type of cyberattack requires no specialized technical skill expands the potential number of attackers.
The most important step of a solid layered security model is the one that prevents these attacks through education and awareness. By providing end users engaging, thorough, and relevant training about attacks such as phishing and social engineering, organizations arm their staff with the tools they need to avoid malicious links, prevent malware or rootkit installation, and dodge credential theft.
No perfect security
No matter where the job gets done, HR needs to deliver effective services to employees while still taking steps to keep employee data safe. Even though an organization cannot control every aspect of how work is getting done, these steps will help keep sensitive HR data safe.
Control over accounts, how they are monitored, and what they are accessing are important steps. Arming the end user directly, with the awareness needed to prevent having their good intentions weaponized, requires a combination of training and controls that create a proactive system of prevention, early warnings, and swift remediation. There is no perfect security solution for protecting HR data, but multiple, overlapping security layers can protect valuable HR assets without making it impossible for HR employees to do their work.
Internal investigations in corporations are typically conducted by the human resources (HR) department, internal compliance teams, and/or the IT department. Some cases may also require the involvement of outside third parties like forensic experts, consultants, law or accounting firms, or security experts.
These are often complex matters from a legal, process and technical perspective. Depending on the nature and extent of the potential misconduct, the stakes can be very high, with risks that include legal jeopardy, large fines or damages, negative publicity, and damage to company culture and morale. Speed and efficiency are vital: organizations need to understand the extent of the problem and act immediately to prevent further damage.
Key phases of an internal investigation
An internal investigation typically follows five key phases: a trigger event; a legal hold and custodian interviews; requests for data and data collection; processing, review and analysis of files; and the recommendation of next steps. COVID-19 and work-at-home requirements are most relevant to the second and third phases, in which interviews take place and data is requested and collected.
A trigger event kicks off an action from a legal, compliance, or investigative standpoint. While complaints to HR alleging discrimination or harassment based on race or gender are among the most common triggers of an internal investigation, other triggers include leaked or stolen intellectual property, whistle-blower complaints alleging fraud or compliance violations, the loss or theft of physical assets, or leaked or stolen data containing sensitive or personally identifiable information (PII).
In the next phase, legal hold and custodian interviews, the legal department must quickly perform an assessment of the veracity of the allegation(s) and the degree of risk involved, and then determine whether further investigative action is required. If a decision to continue is made, a legal hold is immediately put in place.
While some companies may be able to preserve information by working with their IT department without notifying the person(s) being investigated, in other cases the organization may need to send an official notification to the person(s) and ask for their cooperation in preserving information. The latter option is more common, especially in the age of COVID-19.
Initial interviews will often expand the scope of the investigation. A custodian may say, “I only worked on that project for a week; X was the driving force behind it,” or “I’ve only been with the company for a month, but Y and Z have been working on this since last year.” As the number of custodians grows, so does the number of devices to collect data from. Data locations and data types also have a tendency to multiply, with sources ranging from corporate email, text messages, file shares, and “loose” files stored on local devices or thumb drives to cloud storage like Office 365, Dropbox, Google Vault and even, in some cases, surveillance video.
After custodian interviews, it’s time for the request for and collection of data. The complexity at this stage depends to a large extent on the company’s information infrastructure. Especially during the pandemic, cloud-based data or work product saved in a virtual environment will be more straightforward to collect than on-premise data or data stored locally on a mobile device. Collection can become especially challenging with work-at-home requirements. A custodian may need to allow a forensics professional to access their device(s) at home. In other cases a device—which the custodian presumably needs to do their job—may need to be shipped.
Before COVID-19, an employee under investigation could be surprised with an on-the-spot collection at the office under the guise of an in-person meeting or “routine” request to bring in a device for an IT upgrade or a mandatory security update. Such strategies are much less likely to be practical or successful in a remote work environment. “At home” collection may also become impossible if the employee has opted to work from a second home or another location in a different region.
Employees using their own devices for remote work present a further complication. Devices like personal phones or tablets usually lack many of the security protections embedded in a company-provided mobile device and are therefore more vulnerable to malware, spyware, and co-mingled (personal and work-related) data. The data is also much more likely to be accessible by family and friends, increasing the potential for vulnerability as well as foul play. Upon collection, such data will often need to go through more extensive screening, and custodians may be more reluctant to cooperate when personal information is stored on a device targeted for collection. It is also possible they may use the virus as a pretext and refuse to allow a forensic professional into their home.
Increasing numbers of companies are turning to remote assisted collection kits (RACKs), which allow a forensic investigator to gain access to a device online and gather data directly from it. While RACK collections are forensically sound and legally defensible, some RACKs are designed to create a forensic image of a device and can consume large amounts of Internet bandwidth in the process. With less robust home connections, this can result in the disruption of ordinary work, or perhaps open the door to delaying tactics or data erasure on the part of custodians who have something to hide.
Once the data collection phase is complete, COVID-19-related constraints on the investigation recede from the picture. Processing, reviewing and analyzing files can proceed as normal—although review teams will be dispersed and have to be managed via a virtual collaborative workspace. The last phase in the investigation, recommending a next step, involves either closing the investigation, expanding it or possibly bringing in third parties such as a managed document review company and/or outside counsel.
Given the complexity of many internal investigations and the risks involved, it’s surprising how many organizations conduct them in an ad hoc manner. This is asking for trouble, especially in the age of COVID-19. Careful planning, clear policies and a consistent, formal process are essential. Each matter should begin with the development of a step-by-step plan based on the type of event and the trigger.
Detailed documentation is crucial every step of the way, so stakeholders can continually monitor progress while assessing scope and risk, and to be certain information is gathered in a legally defensible way. Documentation should address:
1. The investigation plan, processes and updates.
2. The data chain of custody.
3. The scope of the investigation, which needs to be legally “reasonable.”
In addition to working closely with the IT department, the investigation team should also consider engaging a company that specializes in forensic collections and solicit the input of the organization’s trusted eDiscovery provider. While some companies do not routinely use eDiscovery tools in internal investigations, these tools can save significant time and money in the culling, analysis, and review of data, particularly when they have a built-in cloud collections capability. AI technologies can dramatically speed up the process while minimizing human error and increasing accuracy, especially in investigations involving large volumes of data.
AI tools also have tremendous potential for companies seeking to apply more proactive controls over information governance and record management, identify potential security vulnerabilities before they become serious liabilities, and perform regular compliance audits. For example, these tools can perform privacy audits and assess an organization’s vulnerability to violations of regulations like the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA). AI technologies can also be deployed to look for data anomalies that may indicate security breaches or suspicious behavior.
While the age of COVID-19 presents new challenges for internal investigations, companies should be able to weather the storm by identifying which processes in their investigation workflows will need to change, carefully following best practices, and ensuring they have appropriate, scalable technologies that can be deployed quickly when a new matter emerges.
There doesn’t seem to be an end in sight to the COVID-19 crisis, but there are some important end-of-life/end-of-support dates we should be aware of when it comes to software.
Before we dig into this month’s forecast of updates, I want to spend a little time on the importance of planning ahead to avoid the high costs associated with extended support contracts, or sometimes worse, modifying your network environment to mitigate risks.
Remember when Windows XP end-of-life was a ‘date on the horizon’ that you would deal with when it got closer? Suddenly Windows 7 has reached the same point. In fact, we’ve just gone over the six-month point in the first year of Extended Support Updates for Windows 7 and Server 2008.
The operational lifespan of an operating system version is shrinking, and the model has changed as Microsoft moved to the software-as-a-service model for Windows 10. It is imperative we keep track of critical dates associated with both operating systems and applications in order to maintain a functional work environment.
Microsoft has extended the support dates on a few operating systems, but those dates are rapidly approaching. The Enterprise and Education editions of Windows 10 versions 1709 and 1803 reach end of service in October and November of this year, respectively. The Home and Professional editions of Windows 10 version 1809 reach end of service in November as well. Double-check your applications to ensure compatibility as you make the operating system upgrades on these systems – you only have 2-3 months left!
We have a little breathing room for the remaining non-Windows 10 operating systems. Both Windows 8.1 and the Server 2012 variations reach their end-of-extended-support in October 2023. Once we reach that point in time, we’ll only have Windows 10 left (or the latest new operating system from Microsoft).
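A simple lifecycle tracker removes the "date on the horizon" problem entirely. The sketch below uses the end-of-support dates as cited in this article; verify them against Microsoft's official lifecycle documentation before acting on them:

```python
from datetime import date

# End-of-support dates as cited in this article -- confirm against
# Microsoft's lifecycle pages, since these can be revised.
END_OF_SUPPORT = {
    "Windows 10 1709 (Ent/Edu)": date(2020, 10, 13),
    "Windows 10 1803 (Ent/Edu)": date(2020, 11, 10),
    "Windows 10 1809 (Home/Pro)": date(2020, 11, 10),
    "Windows 8.1 / Server 2012": date(2023, 10, 10),
}

def days_remaining(product, today=None):
    """Days left before a product stops receiving security updates."""
    today = today or date.today()
    return (END_OF_SUPPORT[product] - today).days
```

Feeding the tracker into a monthly report gives management a concrete countdown for each upgrade project rather than a vague future deadline.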
There will be situations where you’ll reach the end of support and there won’t be new patches for the system, but you need to maintain the operating systems and their legacy applications to meet business needs. You’ll need to look at other options to mitigate the security risks introduced by these increasingly vulnerable systems.
Consider virtualization or locking down the system to run only the specific applications you need. Electronic separation is another option—moving them from direct internet connectivity or into more protected parts of your network. Heightened monitoring through next-gen antivirus or endpoint detection and response solutions can also provide added protection. Choose what works best for you but have a plan and timeline in place for their replacement.
My forecast last month was accurate with regards to record numbers of CVEs addressed. I don’t believe we’ll see this sustained growth but expect a higher than average number to be addressed again this month.
August 2020 Patch Tuesday forecast
- Expect a normal set of operating system and application updates, including ESUs, from Microsoft. I’ve been anticipating a SQL Server or Exchange Server update, so maybe it will happen this month?
- Every operating system received a service stack update (SSU) last month. We may get a break here next week.
- In keeping with the ‘planning for the end’ theme this month, Adobe Flash reaches end-of-life at the end of the year. Plan accordingly because a lot of applications still rely on Flash. Adobe may be giving Flash extra attention as we near the end of its life, so be on the lookout.
- We have a pre-notification from Adobe that APSB20-48 for Acrobat and Reader should release on patch Tuesday.
- Apple released iTunes 12.10.8 for Windows at the end of July. We could see a similar update for iCloud this week.
- Google Chrome 85 is in the beta channel and may be released next week.
- Mozilla provided security updates for Firefox 79, Firefox 68 ESR and 78 ESR, as well as Thunderbird 68 and 78 the last week of July. There is a small possibility of a minor security update for some of these applications next week.
The days of sitting on an operating system for 5-10 years with just patching are gone. Patching remains critical for the tactical protection of your systems, but strategic planning for the ongoing upgrades of operating systems and applications is the key to their long-term stability and security.
It’s no secret that the current pandemic is causing a major strain on consumers and businesses alike. As the U.S. teeters on the verge of a recession, companies are cutting their spending wherever they can — including in cybersecurity. Gartner estimates that security faces cuts as high as $6.7 billion — an unfortunate outcome, particularly since most organizations are also experiencing an expansion of their attack surface as a result of more people working from home.
In some ways, cuts in security budget aren’t surprising. Security has experienced growing budgets for years, but many security professionals have a hard time explaining to executives and board members what, exactly, they’re getting for the spend. Executives have struggled to understand cyber risk for some time, and in a tough economic environment, security is easier to put on the chopping block if it is perceived as a “tax” on the business.
But while some security programs have become bloated, many don’t necessarily deserve to be cut. Given the gravity of today’s situation, it’s time for security leaders to step in and do what they can to justify spending that bolsters their company’s overall security posture. With the right strategy in place, these leaders can be properly equipped to save their organizations from major monetary losses and damage to their brand reputation.
Speaking the “board member” language
Executives and board members have been known to have their doubts about the ROI of their security investments. Their days are driven by facts and figures — and security performance is too often discussed and evaluated in vague terms (ranging on a scale from low to high) that don’t resonate with leaders.
For senior management to really understand the effectiveness of good security measures, security leaders need to leverage quantitative metrics and share something more concrete to demonstrate the high value a strong security strategy brings. There are many strategic and tactical measurements that security leaders can share with executives and the board that demonstrate the effectiveness of programs and technology deployment. Some common metrics used to demonstrate program effectiveness include tracking the number of malware incidents blocked or the percentage of phishing emails filtered.
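Turning raw control counts into board-ready figures is mostly arithmetic; the discipline is in collecting the counts consistently. A minimal sketch, with hypothetical quarterly numbers pulled from SIEM and mail-gateway logs:

```python
def program_metrics(counts):
    """Turn raw control counts into board-ready percentages.
    `counts` maps metric name -> (caught, total)."""
    return {
        name: round(100.0 * caught / total, 1) if total else 0.0
        for name, (caught, total) in counts.items()
    }

# Illustrative quarterly numbers -- substitute your own telemetry.
quarter = {
    "malware incidents blocked": (1840, 1900),
    "phishing emails filtered": (9650, 9800),
}
```

Reported quarter over quarter, the same two percentages become a trend line executives can read at a glance, which is far more persuasive than a qualitative "low/medium/high" rating.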
But it’s important to balance your own view with that of an independent third party perspective too. Objective, quantitative metrics like security ratings, for example, can be useful in providing comparative analysis and meaningful correlation to security outcomes. The lower the security rating given to the company, the more likely they are to experience a breach — and the more urgent and important it is to deploy the necessary services to avoid a potential disaster. Furthermore, some security ratings are used frequently in insurance underwriting and customer decision making, affirming the importance of understanding that metric at the senior-most level of the organization.
Using a specific kind of metric, security leaders have a better chance of grabbing the C-suite’s attention. The right data has the ability to prove to decision makers just how important security is.
Enabling the remote workforce
Everyone’s business faces challenges from COVID-19, and companies need to focus on enabling their workforce to succeed. Security teams must recognize that they play a critical role in helping the business during these challenging times, and they can’t just say “no” to everything.
One challenge that many are dealing with right now is enabling the remote workforce. Companies don’t have many options at this point, so workers must be allowed to access the corporate network from their home offices. But we also know that residential IPs account for more than 90% of all observed malware infections, making remote access considerably riskier.
Security professionals can help their businesses by developing capabilities that allow for continuous identification of vulnerabilities and infections on IP addresses associated with remote and home offices. Doing so will allow security teams to discover issues quickly, and more effectively manage higher risk remote operating environments. In other words, they’ll be able to ensure no harm comes to their organization while its employees work remotely.
Enabling business partnerships
Another example of how security can enable the business during these challenging times is through more efficient and effective onboarding of new vendors.
When the shift to work from home began months ago, organizations everywhere sought to onboard new vendors like Zoom. But how were they going to effectively perform risk assessments on organizations in hours or days, rather than the 8-12 week time frame that it typically takes to do a third party cyber risk assessment?
By leveraging data and automation, security leaders can transform their third party risk management programs, rapidly assessing and onboarding vendors so the business can start working with them toward its goals. These efforts can actually be better at identifying risk than the typical qualitative, on-site assessment, which captures only a snapshot in time. Security professionals who shift their programs this way can be more responsive to the business and establish a stronger working relationship during challenging times.
The power of benchmarking
Another way to get the C-suite’s attention? Competitive analysis. By benchmarking a company’s security program against competitors, security teams can highlight areas where their programs are performing in line — or out of line — with peers and competitors. In this day and age, no executive or board member wants to be underperforming their industry; but when it comes to cybersecurity, measuring and benchmarking have always been challenging.
Data and analytics now provide security professionals with the ability to quantitatively and objectively measure their programs across a variety of categories — and many security pros effectively use these benchmarks to highlight areas of investment or justify new spend.
The way forward
Right now, security teams are facing an uphill battle as they work to keep their organizations safe and secure. They’re also facing significant budget challenges. It’s up to security leaders to step in and prove that they can combat the current threats their companies face, but with an eye toward cost-optimization and cost-savings.
Using a combination of the above strategies, security leaders have a better shot at justifying security spending during a time when budgets are being slashed. By focusing on measurement, business enablement (including work from home and vendor onboarding), and competitive benchmarking, security leaders can establish greater credibility across the business, in the C-suite, and in the boardroom.
The idea that security is everyone’s business is a familiar refrain. But as enterprises look to combine the speed of software delivery with both cybersecurity and business value, they need to incorporate the idea that business is everyone’s business too. When talking about governance with regard to software development and security, you cannot ignore the business. Security governance typically operates at two levels. The first involves business executives who recognize the importance of security and …
The volume of business data worldwide is growing at an astounding pace, with some estimates showing the figure doubling every year. Over time, every company generates and accumulates a massive trove of data, files and content – some inconsequential and some highly sensitive and confidential in nature.
Throughout the data lifecycle there are a variety of risks and considerations to manage. The more data you create, the more you must find a way to track, store and protect against theft, leaks, noncompliance and more.
Faced with massive data growth, most organizations can no longer rely on manual processes for managing these risks. Many have instead adopted a vast web of tracking, endpoint detection, encryption, access control and data policy tools to maintain security, privacy and compliance. But deploying and managing so many disparate solutions creates a tremendous amount of complexity and friction for IT and security teams as well as end users. This approach also falls short of the level of integration and intelligence needed to manage enterprise files and content at scale.
Let’s explore several of the most common data lifecycle challenges and risks businesses are facing today and how to overcome them:
Maintaining security – As companies continue to build up an ocean of sensitive files and content, the risk of data breaches grows exponentially. Smart data governance means applying security across the points at which the risk is greatest. In just about every case, this includes both ensuring the integrity of company data and content, as well as any user with access to it. Every layer of enterprise file sharing, collaboration and storage must be protected by controls such as automated user behavior monitoring to deter insider threats and compromised accounts, multi-factor authentication, secure storage in certified data centers, and end-to-end encryption, as well as signature-based and zero-day malware detection.
Classification and compliance – Gone are the days when organizations could require users to label, categorize or tag company files and content, or task IT to manage and manually enforce data policies. Not only is manual data classification and management impractical, it’s far too risky. You might house millions of files that are accessible by thousands of users – there’s simply too much, spread out too broadly. Moreover, regulations like GDPR, CCPA and HIPAA add further complexity to the mix, with intricate (and sometimes conflicting) requirements. The definition of PII (personally identifiable information) under GDPR alone encompasses potentially hundreds of pieces of information, and one mistake could result in hefty financial penalties.
Incorrect categorization can lead to a variety of issues including data theft and regulatory penalties. Fortunately, machines can do in seconds, and often with better accuracy, what might take a human years. AI and ML technologies are helping companies quickly scan files across data repositories to identify sensitive information such as credit card numbers, addresses, dates of birth, social security numbers, and health-related data, and to apply automatic classifications. They can also track files across popular data sources such as OneDrive, Windows File Server, SharePoint, Amazon S3, Google Cloud, GSuite, Box, Microsoft Azure Blob, and generic CIFS/SMB repositories, helping organizations better visualize and control their data.
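To make the pattern-matching piece of automated classification concrete, here is a minimal sketch in Python. The patterns, labels, and the Luhn filter are illustrative assumptions, not the method of any particular product; production classifiers combine many more signals (context, ML models, file metadata).

```python
import re

# Hypothetical patterns for two common classes of sensitive data.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def luhn_valid(number: str) -> bool:
    """Luhn checksum to cut false positives on candidate card numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def classify(text: str) -> set:
    """Return the set of sensitive-data labels found in a piece of text."""
    labels = set()
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            # Only count card-number candidates that pass the checksum.
            if label == "credit_card" and not luhn_valid(match.group()):
                continue
            labels.add(label)
    return labels
```

A governance tool would run a function like `classify` over file contents pulled from each connected repository and attach the resulting labels as metadata, which downstream policy engines can then act on.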
Retention – As data storage costs have plummeted over the past 10 years, many organizations have fallen into the trap of simply “keeping everything” because it’s (deceptively) cheap to do so. This approach carries many security and regulatory risks, as well as potential costs. Our research shows that exposure of just a single terabyte of data could cost you $129,324; now think about how many terabytes of data your organization stores today. The longer you retain sensitive files, the greater the opportunity for them to be compromised or stolen.
Certain types of data must be stored for a specific period of time in order to adhere to various customer contracts and regulatory criteria. For example, HIPAA regulations require organizations to retain documentation for six years from the date of its creation. GDPR is less specific, stating that data shall be kept for no longer than is necessary for the purposes for which it is being processed.
Keeping data any longer than absolutely necessary is not only risky, but those “affordable” costs can add up quickly. AI-enabled governance can track these set retention periods and minimize risk by automatically securing or eliminating old or redundant files that are no longer required (or allowed). With streamlined data retention processes, you can decrease storage costs, reduce security and noncompliance exposure and optimize data processing performance.
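As a rough sketch of what such an automated retention check involves, consider the following Python fragment. The policy table, labels, and file paths are hypothetical; a real system would pull labels and timestamps from a governance platform and route expired files to secure deletion or archival rather than returning a list.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention policy: classification label -> minimum keep period.
RETENTION = {
    "hipaa_record": timedelta(days=6 * 365),   # e.g., HIPAA's six-year rule
    "marketing_draft": timedelta(days=90),
}

def files_past_retention(files, now=None):
    """Given (path, label, created_at) tuples, return paths whose
    retention window has elapsed and are candidates for disposal."""
    now = now or datetime.now(timezone.utc)
    expired = []
    for path, label, created_at in files:
        keep_for = RETENTION.get(label)
        if keep_for is not None and now - created_at > keep_for:
            expired.append(path)
    return expired
```

Running this periodically over the file inventory turns retention from a manual cleanup project into a continuous, auditable process.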
Ongoing monitoring and management – Strong governance gets easier with good data hygiene practices over the long term, but with so many files to manage across a variety of different repositories and storage platforms, it can be challenging to track risks and suspicious activities at all times. Defining dedicated policies for what data types can be stored in which locations, which users can access them, and the parties with which they can be shared will help you focus your attention on further minimizing risk. AI can multiply these efforts by eliminating manual monitoring processes, providing better visibility into how data is being used and alerting you when sensitive content may have been shared externally or with unapproved users. This makes it far easier to identify and respond to threats and risky behavior, enabling you to take immediate action on compromised accounts, move or delete sensitive content that is being shared too broadly or stored in unauthorized locations, etc.
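The core of such a sharing-policy check can be sketched in a few lines of Python. The event fields, labels, and allowed domain below are illustrative assumptions; a real monitor would consume audit events from each platform's API and feed alerts into a SIEM or response workflow.

```python
# Hypothetical policy: sensitive content may only be shared within these domains.
ALLOWED_DOMAINS = {"example.com"}
SENSITIVE_LABELS = {"pii", "financial"}

def flag_risky_shares(events):
    """Return the files from share events where sensitive content
    is shared outside the approved domains."""
    alerts = []
    for event in events:
        recipient_domain = event["shared_with"].rsplit("@", 1)[-1]
        if event["label"] in SENSITIVE_LABELS and recipient_domain not in ALLOWED_DOMAINS:
            alerts.append(event["file"])
    return alerts
```

Each alert would then trigger a response action, such as revoking the share, quarantining the file, or suspending the account involved.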
The key to data lifecycle management
The sheer volume of data, files and content businesses are now generating and managing creates massive amounts of complexity and risk. You have to know what assets exist, where they’re stored, which users have access to them, when they’re being shared, what files can be deleted, which need to be stored in accordance with regulatory requirements, and so on. Falling short in any one of these areas can lead to major operational, financial and reputational consequences.
Fortunately, recent advances in AI and ML are enabling companies to streamline data governance to find and secure sensitive data at its source, sense and respond to potentially malicious behaviors, maintain compliance and adapt to changing regulatory criteria, and more. As manual processes and piecemeal point solutions fall short, AI-enabled data governance will continue to dramatically reduce complexity both for users and administrators, and deliver the level of visibility and control that businesses need in today’s data-centric world.
Privacy is a basic right and a necessary protection in the digital age to avoid victimization and manipulation.
In much of the world, privacy is considered a basic human right. Citizens in the European Union, for example, have the right to dignity: their laws respect individuals’ rights to a private life, to act without coercion, and to maintain control of their personal information. These rights are considered so valuable that they are an integral part of EU society. Europe and much of the world have codified them into legislation, largely because of the lessons of the past.
A society cannot have liberty without privacy. Privacy can appear to be a luxury, but it is essential to the well-being of a free and just society.
Throughout history, races and groups of people have been persecuted due to their characteristics, affiliations, possessions, or beliefs. Governments, powerful business entities, criminals, and influential organizations have often sought to obtain private information so they can malign individuals and control or manipulate the masses. Privacy has been one of the shields used to protect people from unjust victimization.
Invasion of privacy as a weapon
During WWII, the Axis powers targeted specific races and religions, to the point of near genocide. Many of those who survived did so because they were able to keep their information private, essentially hiding in the crowd. We witnessed the persecution of people demonstrating for democracy during the Arab Spring movements. Their digital signatures and locations were harvested by oppressive governments to identify people attending public rallies.
Many governments and employers actively spy on their citizens to monitor for undesired ideas, discussions, or dissent. Violators are then prosecuted or re-educated to align with what those in authority deem appropriate. Without the benefit of anonymity, citizens’ desire to express their thoughts is effectively repressed.
Governments undermine privacy to control or influence people. In the United States, during the recent Black Lives Matter protests, surveillance concerns have resulted in IBM, Microsoft, and Amazon rethinking their participation in providing facial recognition solutions to law enforcement. Protecting privacy is crucial for whistleblowers who come forward to expose injustice. Investigative reporters are ethically bound to protect the identity of their confidential sources for this very reason. Harassment and mistreatment can remain hidden at a tremendous scale when people are fearful of reporting issues because they feel they can be identified.
Privacy protects the innocent from oppression
Autocratic regimes, whether it’s the highest level of government or caustic management of a business, often suppress complaints and new ideas that might undermine their authority or reveal inappropriate acts. Privacy allows dissension, reporting of issues, expression of ideas, constructive resolution of disagreements, and liberty to be heard. Privacy strengthens a community and gives victims a voice by safeguarding free speech that is necessary to counter oppression.
In the digital era, privacy goes beyond anonymity as it also protects people from victimization and manipulation. Society has embraced technology to get educated, communicate, conduct business, and form relationships. Our viewpoints and opinions are strongly influenced by what we learn from local, national, and international news sources.
Data is the new oil
We heavily contribute to the digital landscape through our actions and decisions. Our digital fingerprints are everywhere. They tell a story of where we go, what we do, who we like or dislike, and what we think. They are created by every click we make and every file, application, and device we use. When that data is aggregated, it can provide tremendously powerful insights about a person or community – enough to build complex and accurate personas.
This information is commonly used to manipulate people’s beliefs and behaviors. Online shopping is a perfect example: targeted marketing and data-driven advertising are big business because they are successful at getting people to spend money. It all comes back to knowing what people are doing, thinking, saying, consuming, and watching. Having access to vast amounts of private data gives advertisers the ability to craft timely and meaningful messages that pull people into desired behaviors.
But if retailers can get people to buy things they don’t need, what else can private data be used for? How about changing what people think, who they support, their political views, what should become a law, and what to believe? Private information has long been leveraged to promote, vilify, or persecute various religions and political parties and leaders.
In the last few decades, how global citizens receive their news has changed. The news and entertainment segments have begun to blend, often reporting facts with embellishments and opinionated stories to sway public opinions. The more private information that is known, the easier it becomes to influence, convince, cajole, or threaten people.
More data = More power
A veil of privacy can shield both benefits and abuses. The current trend is to establish and extend privacy rights for the benefit of citizens. This reduces digital victimization, manipulation, and exploitation by protecting sensitive data and allows for activities that promote liberty and free speech.
Without laws, governments and businesses have evolved practices that leverage the power of gathering sensitive information and using it to their own advantage. New privacy laws (GDPR, CCPA) are changing the landscape with many ethical companies downshifting their collection efforts to be more conservative and respectful. They are also showing flexibility in how they treat, protect, and share such data.
Some governments and agencies are also reducing collection, limiting retention, or ending domestic programs that are considered invasive by citizens. At the same time, law enforcement agencies want to retain capabilities to detect and investigate crimes, to protect the security and safety of citizens.
Privacy is also misused. It is the preferred tool for those committing crimes and allows heinous acts against others to remain undetected. It can conceal terrible acts and allow widespread coordination of fraud, abuse, and terror.
Backdoors and master keys
The argument is made that digital backdoors, master keys, and weakened encryption algorithms that grant access to systems and private information would assist in the lawful detection of criminal activity and in investigations to identify terrorists. Although that sounds like a great tool against criminals, it is a Pandora’s box.
The problem is twofold.
Backdoors and master keys don’t limit access to a specific investigation where probable cause exists; rather, they enable widespread surveillance and data harvesting of an entire population, including law-abiding citizens. This violates people’s right to privacy and opens the door for manipulation and political persecution. The ability to read every text, email, message, and online conversation to “monitor” the population creates a clear path to abuse. The risk of control and exploitation is real.
Even for those who have no objection to their government having access, we must consider that such backdoors and master keys would also be sought by cybercriminals and other nation-state actors. No system is infallible. Eventually, such tools would be found and used by criminals to the detriment of the global digital community. Some backdoors could be worth tens of billions of dollars to the right buyer, as they could unlock unimaginable power to seize wealth, affect people, damage nations, undermine independence, and stifle free thought.
Protecting privacy is not about hiding information. It is about the ability to be free from unwanted influence and tyranny, and to communicate with others in ways that challenge the status quo. Privacy protects individuals, but also the underpinnings of a free society.
A complicated situation
Privacy is not an easy topic and there is no perfect solution. It is a dynamic situation and will continue to shift with public sentiment.
Everyone wants some level of discretion, confidentiality, and space. Nobody wants their passwords, family finances, details of personal relationships, medical history, location, purchases, and private discussions exposed. Nor do people enjoy being flooded with spam, phishing, and relentless sales calls. Privacy is not so much about hiding something as it is about limiting information to those with a right to know.
Too little privacy can undermine free speech, liberty, and the reporting of victimization. It also enables powerful entities to shape people’s digital world in order to coerce, manipulate, and victimize them. Too much privacy can allow criminal actors to thrive and hide from authorities.
A balance must be struck.
Contributing author: Lisa Thee, Lead, Launch Consulting Group.