According to HP Enterprise’s Business of Hacking report, ad fraud is the easiest and most lucrative form of cybercrime, above activities such as credit card fraud, payment fraud and bank fraud. Luke Taylor, COO and Founder of TrafficGuard, explains why businesses should do what they can to detect and prevent it.
What is ad fraud?
Invalid traffic, which encompasses advertising fraud, is any advertising engagement that is not the result of genuine interest in the advertised offering. This could be fake clicks generated by malware, competitors clicking ads in order to drain your ad spend, or users clicking ads by accident. Ad fraud is a subset of invalid traffic, characterized by its malicious intentions, and has been around for as long as digital advertising.
Every time a consumer sees or clicks on an advertisement, the company advertising pays the website for that displayed ad, as well as any number of adtech vendors and traffic brokers that facilitate the process such as ad networks and exchanges. The more advertising engagement, the more money goes to the pockets of these vendors. Some genuinely grow their audiences, while others use trickery to get non-genuine human engagement or fake bot engagements.
Ad fraud and other forms of invalid traffic can cost up to 30% of an advertiser’s budget. Due to a lack of solutions, many advertisers have become complacent about this aggressive attrition of their ad campaigns, considering it just another cost of online advertising. In 2018, advertisers lost $44 million of advertising spend per day to fraudulent traffic in North America alone. That figure is anticipated to reach $100 million a day by 2023.
The reality is the advertising ecosystem is quite complex, making it difficult for businesses to see whether ad fraud is impacting them. As a result, businesses aren’t taking steps to check their risk, let alone seek protection.
How common is this form of cybercrime and does it affect everyone equally?
Wherever there is money in digital advertising, there is invalid traffic – across all digital channels, all geographies and all players in the advertising ecosystem. Every advertiser is aware that ad fraud exists, but most reject the idea that it is happening to them, because it’s difficult to detect without the proper tools. However, just because one chooses not to see the problem doesn’t mean it’s not there – advertising fraud makes its way into every campaign type (CPM, PPC, install campaigns) and every stage of the advertising journey (impressions, clicks, installs, events).
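Detection of this kind of invalid traffic often starts with simple rate heuristics – for example, flagging sources that click far more often than any genuine user would. A minimal sketch (the thresholds, data shape and function names are illustrative, not drawn from any real product):

```python
from collections import defaultdict
from datetime import timedelta

# Illustrative thresholds only; real tools tune these per channel and campaign.
MAX_CLICKS = 10                 # more clicks than this ...
WINDOW = timedelta(minutes=1)   # ... within this window is flagged

def flag_suspicious_ips(clicks):
    """Given (ip, timestamp) click records, return IPs whose click
    frequency suggests non-genuine engagement."""
    by_ip = defaultdict(list)
    for ip, ts in clicks:
        by_ip[ip].append(ts)

    suspicious = set()
    for ip, stamps in by_ip.items():
        stamps.sort()
        for i in range(len(stamps)):
            # count clicks inside the sliding window starting at stamps[i]
            in_window = sum(1 for t in stamps[i:] if t - stamps[i] <= WINDOW)
            if in_window > MAX_CLICKS:
                suspicious.add(ip)
                break
    return suspicious
```

Real fraud-prevention products layer many such signals (device fingerprints, conversion behavior, known bot signatures) on top of frequency checks; this only shows the shape of the approach.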
With fraud mitigation and ad quality assurance tools, businesses could achieve big improvements to their advertising performance. The average company now spends 16% of its IT budget on cybersecurity protection measures, yet the issue of ad fraud goes unaddressed, as security decision makers remain oblivious to this challenge. From fake mobile display traffic to bots, ad fraudsters are undercutting businesses’ marketing and customer acquisition efforts.
How do these fraudsters operate, what’s in it for them and how much money are they “collecting” from businesses’ advertising budgets?
Ad fraud is both easier to commit and more costly to businesses than other forms of fraudulent activity. Sophisticated criminal organizations are making billions from ad fraud. The reality is that it’s nearly impossible to pinpoint their exact origins given how complex the digital advertising ecosystem is. Like any successful business, fraudsters are adapting and diversifying in the pursuit of profit. The more funds that flow to fraud, the more attractive and formidable this type of cybercrime will become – and the less effective the whole digital advertising ecosystem will be.
What are its consequences on businesses’ bottom line and intelligence?
In addition to drained advertising budgets, there are several other negative consequences of ad fraud that limit businesses’ bottom lines, intelligence and ability to grow.
Ad fraud and other forms of invalid traffic skew advertising performance data. This is quite detrimental to marketing efforts, affecting everything from future budgeting to campaign optimization. The impact doesn’t stop at advertising, either. Product, user experience and website design teams rely on data to improve the customer experience. If their baseline data is skewed, their efforts can be spent in the wrong areas.
Fraudulent advertising activity also reduces the effectiveness of the digital advertising ecosystem for everyone. Advertising intermediaries, the companies who connect advertisements to traffic sources, must spend time and money to address ad fraud. This reduces their ability to scale advertising to the best quality sources of traffic – limiting growth for all advertisers.
How can businesses protect their digital ad campaigns from this illicit activity?
The cost of ad fraud is much bigger than just the wasted media spend, which is why it is imperative to prevent. Preventative, transparent tools that stop fraud at the source are the most effective, averting wasted media spend, polluted data and the time-consuming process of manual volume reconciliations.
Optimization is significantly more effective when based on verified traffic data, enabling you to safely and confidently scale your advertising. Some anti-fraud tools operate as a black box, where you’re simply asked to trust that they work. Businesses should instead have access to reporting that shows how fraud prevention is helping them overall. Transparency is essential: there should be clear and defensible reasons for each invalidation.
Ransomware groups have realized that their tactics are also very effective for targeting larger enterprises, and this resulted in a 31% increase of the average ransom payment in Q3 2020 (reaching $233,817), ransomware IR provider Coveware shared in a recently released report.
They also warned that cases where the attackers exfiltrated data and asked for an additional ransom to delete it have doubled in the same period, but that paying up is a definite gamble.
“Despite some companies opting to pay threat actors to not release exfiltrated data, Coveware has seen a fraying of promises of the cybercriminals (if that is a thing) to delete the data,” they noted.
The data cannot be credibly deleted, it’s not secured and is often shared with other parties, they said. Various ransomware groups have posted the stolen data online despite having been paid to not release it or have demanded another payment at a later date.
“Unlike negotiating for a decryption key, negotiating for the suppression of stolen data has no finite end. Once a victim receives a decryption key, it can’t be taken away and does not degrade with time. With stolen data, a threat actor can return for a second payment at any point in the future,” the company said.
“The track records are too short and evidence that defaults are selectively occurring is already collecting. Accordingly, we strongly advise all victims of data exfiltration to take the hard, but responsible steps. Those include getting the advice of competent privacy attorneys, performing an investigation into what data was taken, and performing the necessary notifications that result from that investigation and counsel.”
Coveware’s analyst also found that improperly secured Remote Desktop Protocol (RDP) connections and compromised RDP credentials are the most prevalent way in for ransomware gangs, followed by email phishing and software vulnerabilities.
What’s interesting is that the “popularity” of RDP as an attack vector declines as the size of the target companies increases, because larger companies are typically wise enough to secure it. The attackers must then switch to more expensive means: RDP credentials can be purchased for less than $50, but email phishing campaigns and vulnerability exploits require more effort, time and money – even if they are performed by another attacker who then sells the access to the gang.
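A first step toward securing RDP is simply knowing whether it is reachable from the outside at all. A quick self-check along these lines (a sketch only – the function name is mine, and a TCP probe is no substitute for a proper external audit):

```python
import socket

def rdp_exposed(host, port=3389, timeout=3):
    """Return True if the host accepts TCP connections on the RDP port.

    An open port does not prove RDP is exploitable, but a reachable 3389
    from the public internet is exactly the foothold the report describes.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running this from an external network against your own public IP range is a cheap sanity check; anything that answers should be behind a VPN or gateway, with strong credentials and MFA.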
“The foothold created by the phishing email or CVE exploit is used to escalate privileges until the attacker can command a domain controller with senior administrative privileges. Once that occurs, the company is fully compromised and data exfiltration + ransomware are likely to transpire within hours or days,” they explained.
Companies/organizations in every industry can be a target, but attackers seem to prefer those in the professional services industry, healthcare and the public sector.
The story of digital authentication started in an MIT lab in 1961, when a group of computer scientists got together and devised the concept of passwords. Little did they know the anguish it would cause over the next 50 years. Today, most people possess more than 90 username-and-password combinations and would rather click “Reset password” than try to remember them all.
Unfortunately, passwords are not only inconvenient, but dangerous as well – it’s a problem the world has been grappling with for the last 20 years, at least. Somewhere in the background, though, the authentication wheel has been turning and recently, at the Apple Worldwide Developer Conference (WWDC), two promising announcements were made.
But first, let’s backtrack a bit…
Everybody loves pizza
Authentication has evolved in several interesting ways. Two-factor authentication, for example, was developed in response to account takeover fraud – and it had its place. But when people started doubling up on the knowledge factor, we started seeing instances of knowledge-based authentication where, if you forgot your password, you could enter your mother’s maiden name, the title of your favorite book or your favorite food. Attackers could still succeed by guessing because, as it turns out, most people like pizza!
What if those scientists had started out differently and looked more closely at how other valuables were being protected?
House and car keys, for example, still represent strong possession factors that grant access to high-value assets. They’ve been used for ages with great success and, as a result, make the concept of possession as a primary factor easy for users to understand: “keep your keys safe; they grant you access.” There was never a need to add an extra layer of authentication.
Fast-forward to the digital era, and car keys have evolved to enable keyless entry. Houses, too, are commonly accessed with a remote. In both cases, unique challenge-response mechanisms are used for every transaction, making them extremely difficult to intercept or copy.
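The challenge-response pattern is easy to sketch in code. The following is a generic nonce-based exchange using HMAC – real keyless-entry systems use rolling codes and proprietary radio protocols, so treat this purely as an illustration of the principle that a fresh challenge makes a captured response worthless:

```python
import hashlib
import hmac
import secrets

def new_challenge():
    """The verifier (car, house, server) issues a fresh random nonce
    for every transaction, so no two exchanges look alike."""
    return secrets.token_bytes(16)

def respond(shared_key, challenge):
    """The key holder proves possession of the secret by keying an
    HMAC over the challenge, without ever transmitting the secret."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key, challenge, response):
    """Constant-time comparison against the expected response."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because each challenge is unique, replaying an eavesdropped response against the next challenge fails – which is exactly why this beats a static password sent over the wire.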
Which brings me back to the first of two Apple announcements mentioned earlier.
Where physical meets digital
After much experimenting with identification and endpoints, the iPhone can now act as a car key. Though Apple devices are protected by biometrics and PINs, isn’t it ironic that after all this time, the iPhone – in all its sophisticated glory – has become like a physical key in a sense?
Had that MIT team been able to use an uncopiable “digital key,” perhaps today’s digital world would not be littered with billions of passwords, and attackers would have had to physically approach their victims to steal their keys. That would have cost money and exposed them to capture, making attacks much more costly and risky compared to sending out thousands of phishing emails at a time.
Of course, there have been several attempts to come up with alternatives. Many dedicated hardware devices have been used over the years with varying degrees of success, but no one has ever hit the nail on the head.
Some companies allocated a number but did not generate it themselves. Instead, they used a number found or calculated on the device (like the phone’s IMEI or browser fingerprinting), breaking the challenge-response paradigm and nullifying the isolation principle. Others issued physical hardware (like keys) that created cost and distribution challenges, not to mention them being yet another thing for users to carry around.
A vision of endpoint perfection
Companies entering this space need to recognize the value of secure endpoints and find a solution that will:
- Ensure that each endpoint instance is allocated a unique, once-off value
- Ensure that each challenge-response mechanism is unique every time
- Limit the “key” to a single use and have a unique “key” for each mobile app
- Have the ability to issue new keys for each new use case and make the linking easy
- Have the ability to issue keys to devices that users already have in their possession
This can result in stable endpoints. Though certain requirements may force a business to include passwords here and there, the endpoint always needs to be the anchor.
When looking at companies that applied the security principles mentioned above, many arrived at similar solutions. The FIDO Alliance, for example, launched eight years ago to tackle the world’s over-reliance on passwords. They chose to focus mainly on protecting website logins. However, there are ways that businesses can obtain certifications and become FIDO compliant.
Google announced that FIDO would be built into Android devices. Microsoft then followed suit, adding it to their authentication setup in Windows (Windows Hello). Only one dominant player remained – Apple – and they were silent. Then, suddenly, with iOS 13.3, Safari started supporting external FIDO tokens. So, when Apple joined the FIDO Alliance in February this year, many were already anticipating a WWDC unveiling – yes, the second announcement.
Now, the endpoint puzzle is finally complete and later this year, all major desktop (Windows and macOS) and mobile (iOS and Android) operating systems will feature built-in FIDO authenticators operating as secure endpoints.
Trusted endpoints: Where we need to be
The vision of trusted endpoints is becoming a reality and finally, context-specific identities can be provisioned into most consumer devices. Consumers can now trust in a physical device, not in some digital thing that can easily be lost or forgotten.
To succeed, attackers will need to gain access to the physical device, which is not easily done.
Of course, there are many challenges we still need to tackle. However, they pale in comparison to the potential that now exists to create exciting new customer journeys using a universal platform authenticator.
In today’s world, most external cyberattacks start with phishing. For attackers, it’s almost a no-brainer: phishing is cheap and humans are fallible, even after going through anti-phishing training.
Patrick Harr, CEO at SlashNext, says that while security awareness training is an important aspect of a multi-layered defense strategy, simulating attacks during computer-based training sessions is not an effective way to learn, because people don’t necessarily retain the information.
“Working from home, where there are more distractions, makes it even less likely that people really pay attention to these trainings. That’s why it’s not uncommon to see the same people who tune out training falling for scams again and again,” he noted.
That’s why defenders must preempt attacks, he says, and reinforce a lesson during a live attack. When something gets through and someone clicks on a malicious URL, defenders must be able to simultaneously block the attack and show the victim what the phisher was attempting to do.
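That “block and teach” flow can be sketched as a simple check at click time. The blocklist entries and messages below are invented for illustration; real products rely on live threat feeds and ML classifiers rather than a static set:

```python
from urllib.parse import urlparse

# Hypothetical look-alike phishing domains, invented for this example.
BLOCKLIST = {
    "login-micros0ft.example.com",
    "paypa1-secure.example.net",
}

def check_click(url):
    """Return (allowed, explanation). When a click is blocked, the
    explanation is shown to the user – reinforcing the lesson at the
    moment of the live attack, as described above."""
    host = urlparse(url).hostname or ""
    if host in BLOCKLIST:
        return (False,
                f"Blocked: {host} imitates a well-known brand. "
                "Attackers use look-alike domains to steal credentials.")
    return (True, "")
```

The key idea is that the block and the teachable moment happen in the same step, instead of the lesson arriving months later in a training session.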
Latest phishing trends
Harr, who has over 20 years of experience as a senior executive and GM at industry leading security and storage companies and as a serial entrepreneur and CEO at multiple successful start-ups, is now leading SlashNext, a cybersecurity startup that uses AI to predict and protect enterprise users from phishing threats.
He says that most CISOs assume phishing is a corporate email problem and their current line of defense is adequate, but they are wrong.
“We are detecting 21,000 new phishing attacks a day, many of which have moved beyond corporate email and simple credential stealing. These attacks can easily evade email phishing defenses that rely on static, reputation-based detection. That’s why we typically see 80-90% of attacks evading conventional lines of defense to compromise the network,” he told Help Net Security.
“Magnify this by 150,000 new zero-hour phishing threats a week, almost double the number of threats versus a year ago, and we can safely say, ‘Houston we have a problem!’”
They are seeing:
- More text-based phishing, with no actual links, across SMS, gaming, search services, ad networks, and collaboration platforms like Zoom, Teams, Box, Dropbox, and Slack, as well as attacks on mobile devices
- A proliferation of phishing payloads beyond credential stealing scams which have been around for ages
- An increase in scareware, where phishers attempt to scare people into taking an action, such as sharing an email
- Rogue software attacks embedded in browser extensions and social engineering schemes like the massive Twitter bitcoin scam that happened in July
“Finally, we’re seeing cybercriminals trying out innovative ways to evade detection. For example, bad actors may register a domain that lays dormant for months before going live,” he added, and noted that they’ve witnessed a 3,000% increase in the number of phishing attacks since everyone began working and learning from home, and they expect this growth trend will continue.
Advice for CISOs
His main advice to CISOs is not to be complacent and to be diligent: near term, mid-term, and long term.
“You’ve got to take a comprehensive, multi-layer phishing defense approach outside the firewall, where your biggest user population is working remotely, and inside the firewall for your internal users. You need to protect mobile devices and PC/Mac endpoints, with end-to-end encryption (E2EE) deployed,” he opined.
“You also have to be mindful of corporate users’ personal side, as their personal and business lives have converged, and many people use the same devices and same credentials across personal and business accounts.”
Thirdly, these attacks need to be prevented from happening. “Use AI-enabled defenses to fight AI-enabled attacks. Fight machines with machines and adopt a preemptive security posture.”
Finally, some attacks will inevitably breach all defenses, so organizations must be prepared to quickly detect and respond to them, and perform the necessary cleanup.
A recent survey revealed that, on average, organizations must comply with 13 different IT security and/or privacy regulations and spend $3.5 million annually on compliance activities, with compliance audits consuming 58 working days each quarter.
As more regulations come into existence and more organizations migrate their critical systems, applications and infrastructure to the cloud, the risk of non-compliance and associated impact increases.
To select a suitable compliance solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.
Rupert Brown, CTO, Evidology Systems
There are no easy answers to selecting a compliance solution, and complexity is likely to increase due to both technology and political factors.
It’s probably best to tackle the problem along these lines while keeping in mind a few essential questions:
- What are you having to show compliance with – legal, process, behavior, standard, policy, etc.?
- When do you need to show compliance? Is it a single date, a regular cycle or continuous assessment?
- How do you need to show compliance – is it a fixed formal calculation (position/balance sheet, etc.) or some sort of proof of effective surveillance/record keeping, or something else?
- Where is the compliance assessed – remotely by a regulatory authority or “on-premise” by an inspection/audit, or a technical “test”?
- Who is responsible for demonstrating compliance in the organization – designated officers/board members or just general operational roles?
- Why do you need to show compliance – is it due to a legal statute or is it for a business need, to gain access to a particular market or accreditation?
Once you have worked through these dimensions of the problem it will probably become apparent that “one size” doesn’t fit all and a portfolio of solutions will be required, as well as a significant adoption/”culture change” effort.
John Lee, President, CSS
Financial firms need a trusted partner that understands their top compliance challenges – from regulatory change to data management, TCO, risk and scalability. As the regulatory landscape evolves, keeping pace with change means having an effective and automated enterprise compliance management program.
Complementary technology, data analytics, regulatory expertise and managed services are also critical. Vendor risk can create a single point of failure in a compliance strategy, while juggling multiple vendors can add complexity and costs. When you’re integrating multiple data sources, you also need a reliable vendor to keep that data secure.
With the complexities of global compliance, do you have the right in-house technical capabilities and policies to future-proof your organization? Conduct a gap analysis and map out an end-to-end, integrated compliance solution instead of operating with disparate point solutions or large in-house teams that rely on manual processes.
To mitigate both operational and regulatory risk, look for an agile partner of size and depth that is credible and understands global regulation to respond quickly to changing requirements. Compliance rules should be managed in a dynamic way, and you need a higher quality of intelligence, global support and coverage.
Look to a managed service provider with the regulatory expertise to take preemptive measures and optimize your compliance vision – delivering tactical solutions to regulatory requirements while supporting your strategic growth expectations.
Haywood Marsh, General Manager, NAVEX Global
Risk and compliance professionals must constantly assess the unique and ever-changing factors that impact their ability to remain compliant, like regional and national regulatory requirements, security and IT risks, and risks from third parties.
They should look for an integrated risk and compliance solution that seamlessly supports this ongoing effort by aggregating the various external and internal compliance-related and operational risk information into a single, SaaS source that can remain flexible with changing variables and helps them build a more resilient and higher performing business.
Flexible, SaaS solutions can, for example, be configured to adhere to new data privacy laws and international mandates that are constantly being updated. This functionality is key for global companies operating in multiple locations, so they can ensure compliance with regional regulations.
Integrated solutions with a unified view of information are vital because they help all departments – from risk and compliance to legal and HR professionals – work together to better understand the challenges inherent in their business, and streamline risk and compliance management and reporting.
Risk and compliance solutions are arguably the most important part of managing and maintaining a high performing business. Given each program is unique to the organization that it belongs to, the solution should be configurable and equipped to encompass each compliance and risk management need.
These days, you’d be hard-pressed to find connected devices that do not come with companion smartphone applications. In fact, it’s very common for contemporary devices to offload most (if not all) of their display to the user’s handset.
Smartphones and the rise of IoT
Relying on the ubiquity of smartphones and the rise of remote controls, users and vendors alike have embraced the move away from physical device interfaces. This evolution in the IoT ecosystem, however, brings both major benefits and serious drawbacks.
While users enjoy the remote capabilities of companion apps and vendors bypass the need for hardware interfaces, studies show that these apps present serious cybersecurity risks. For example, the communication between an IoT device and its app is often neither properly encrypted nor authenticated – and these issues enable the construction of exploits to achieve remote control of victims’ devices.
How the industry got here
It is important to explain that connected devices have not always been this way. I’m sure others like myself do not need to cast their minds far back to remember a time when smartphones did not even exist. User input during these halcyon days relied on physical interfaces on the device itself, interfaces that typically consisted of basic touch screens or two-line LCD displays.
Though functional, these physical interfaces were certainly limited (and limiting) when compared to the applications that superseded them. Devices without physical interfaces are smaller, consume less power, and look better. Developers, meanwhile, enjoy the relative ease of creating an app – with the additional support of software development kits – instead of manually programming physical interfaces. Perhaps most importantly, it’s many times cheaper for vendors to create devices with companion apps than to create devices with physical interfaces.
All that is without even starting on the benefits of remote connectivity! Smartphone apps enable users anywhere in the world to set the temperature of their air conditioning and record from their home security webcam with the click of a screen. These apps are simply much more expressive and intuitive than physical interfaces, enabling users to customize what they like from wherever they are. On the other hand, however, it is this element of remote connectivity which presents the compromise between usability and security.
The dangers of device companion apps
Unfortunately, the majority of companion apps have the potential to open devices to bad actors. Researchers last year found that about half are potentially exploitable through protocol analysis, since they use local communication or local broadcast communication, providing an attack path that exploits a lack of encryption or the use of hardcoded encryption keys. Further, this study into companion apps from some of Amazon’s most popular devices found a lack of encryption in one-third of cases and the use of hardcoded keys in one-fifth of cases.
These findings were confirmed in another study in which researchers tested more than 2,000 device companion apps for security faults. They found that more than 30 devices from 10 vendors relied on the same cloud service to manage their devices – a service with known security weaknesses that had previously allowed attackers to take full control of devices via device ID and password enumeration.
To make matters worse, there is little incentive for vendors to release fixes when vulnerabilities are uncovered. Most vendors in this space are small and medium-sized businesses that lack the budget for software quality control and security best practices. This issue is only exacerbated by the relative inexpensiveness of the devices they sell, meaning that vendors simply do not have the resources necessary to implement security best practices like monitoring agents or authentication hardware.
What users must do
The good news is that secure communication between a device and an app is possible. For example, EZVIZ smart home security applications support local communication between the companion app and the device over the local network. The shared encryption key is enclosed in the device box in the form of a QR code and must be scanned by the companion app. This strategy is better than hardcoded keys, provided that the key in the QR code is of sufficient length and randomness.
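The provisioning pattern – a high-entropy key printed as a QR code, scanned once by the app – can be sketched with the standard library. This is not EZVIZ’s actual scheme; it only illustrates why a long random key beats a hardcoded one, and a real deployment would also encrypt the traffic (e.g., with AES-GCM) rather than merely authenticate it:

```python
import base64
import hashlib
import hmac
import secrets

def make_provisioning_key():
    """256-bit random key, unique per device; the base32 form is what a
    QR code in the device box would carry."""
    key = secrets.token_bytes(32)
    qr_payload = base64.b32encode(key).decode()
    return key, qr_payload

def key_from_qr(qr_payload):
    """The companion app recovers the shared key by scanning the code."""
    return base64.b32decode(qr_payload)

def tag(key, command):
    """Authenticate a local-network command so the device can reject
    anything not produced by the paired app."""
    return hmac.new(key, command, hashlib.sha256).hexdigest()
```

Because every unit ships with its own random key, compromising one device (or the app binary) reveals nothing about any other – the property a hardcoded key cannot provide.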
Another security workaround can ensure that commands between the client and the device cannot be intercepted by a third party. German smart heating and cooling provider SOREL, for example, uses a private peer-to-peer connection to ensure its smartphone app communicates without interference. Moreover, this connection minimizes risk for the company, since end users manage their data only on their own devices.
The bad news is that users today remain at the mercy of the vendors. There is currently no legislation that requires device makers to ensure that their devices or companion apps implement certain cybersecurity protocols. As we have seen time and again, vendor indifference to cybersecurity consistently results in subpar security protocols.
Therefore, the onus is on users to take extra cybersecurity steps in this context of vendor ambivalence. Until legislators catch up or manufacturers begin to implement stricter security protocols for their devices and apps, users will need to take matters into their own hands to make certain that the devices they bring into the workplace or the home are safe from outside forces. While the benefits of companion apps are clear, it is only the user who can prevent the worst dangers of these digital interfaces from becoming reality.
As the chief owners of the digital infrastructure that underpins all aspects of modern enterprises, CIOs must play pivotal roles in the road to recovery, “seeking the next normal” while still performing their traditional roles. A new IDC study outlines concrete actions that CIOs can and must take to create resilient and adaptive future enterprises with technology.
“In a time of turbulence and uncertainty, CIOs and senior IT leaders must discern how IT will enable the future growth and success of their enterprise while ensuring its resilience,” said Serge Findling, VP of Research for IDC’s IT Executive Programs (IEP).
“The ten predictions in this study outline key actions that will define the winners in recovering from current adverse events, building resilience, and enabling future growth.”
Predictions to keep CIOs resilient
Prediction 1 – #CIOAIOPS: By 2022, 65% of CIOs will digitally empower and enable front-line workers with data, AI, and security to extend their productivity, adaptability, and decision-making in the face of rapid changes.
Prediction 2 – #Risks: Unable to find adaptive ways to counter escalating cyberattacks, unrest, trade wars, and sudden collapses, 30% of CIOs will fail in protecting trust – the foundation of customer confidence – by 2021.
Prediction 3 – #TechnicalDebt: Through 2023, coping with technical debt accumulated during the pandemic will shadow 70% of CIOs, causing financial stress, inertial drag on IT agility, and “forced march” migrations to the cloud.
Prediction 4 – #CIORole: By 2023, global crises will make 75% of CIOs integral to business decision making as digital infrastructure becomes the business OS while moving from business continuation to re-conceptualization.
Prediction 5 – #Automation: To support safe, distributed work environments, 50% of CIOs will accelerate robotization, automation, and augmentation by 2024, making change management a formidable imperative.
Prediction 6 – #RollingCrisis: By 2023, CIO-led adversity centers will become a permanent fixture in 65% of enterprises, focused on building resilience with digital infrastructure, and flexible funding for diverse scenarios.
Prediction 7 – #CX: By 2025, 80% of CIOs alongside LOBs will implement intelligent capabilities to sense, learn, and predict changing customer behaviors, enabling exclusive customer experiences for engagement and loyalty.
Prediction 8 – #Low/NoCode: By 2025, 60% of CIOs will implement governance for low/no-code tools to increase IT and business productivity, help LOB developers meet unpredictable needs, and foster innovation at the edge.
Prediction 9 – #ControlSystems: By 2025, 65% of CIOs will implement ecosystem, application, and infrastructure control systems founded on interoperability, flexibility, scalability, portability, and timeliness.
Prediction 10 – #Compliance: By 2024, 75% of CIOs will absorb new accountabilities for the management of operational health, welfare, and employee location data for underwriting, health, safety, and tax compliance purposes.
Despite the fact that many organizations are turning to outside cybersecurity specialists to protect their systems and data, bringing in a third-party provider remains just a piece of the security jigsaw. For some businesses, working with a technology solutions provider (TSP) creates a mindset that the problem is no longer theirs, and as a result, their role in preventing and mitigating cybersecurity risks becomes more passive.
This is an important misunderstanding, not least because it risks setting aside one of the most powerful influences on promoting outstanding cybersecurity standards: employees. Their individual and collective role in defeating cybercriminals is well understood, and mobilizing everyone to play a role in protecting systems and data remains critical, despite ongoing improvements in cybersecurity technologies. Every stakeholder in this process has a role to play in avoiding the dangers this creates, TSPs included.
Despite the increasing sophistication of cyber attacks, TSPs that invest in key foundational, standardized approaches to training put their clients in a much stronger position. In particular, helping end users to focus on phishing and social engineering attacks, access and passwords, together with device and physical security can close the loop between TSP and end users and keep cybercriminals at bay.
Access, passwords, and connection
TSPs have an important role to play in training end users about key network vulnerabilities, including access privileges, passwords, and the network connection itself. For instance, their clients should know who has general or privileged access.
As a rule, privileged access is reserved for users who carry out administrative-level functions or have more senior roles that require access to sensitive data. Employees should be informed, therefore, what type of user they are in order to understand what they can access on the network and what they can’t.
Passwords remain a perennial challenge, and frequent reminders about the importance of unique passwords are a valuable element of a TSP's training and communication strategy. The well-tried approach of using at least eight characters with a combination of letters and special characters, and excluding obvious details like names and birthdays, can mitigate many potential risks.
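The well-tried rules above can be expressed as a simple policy check. A minimal sketch in Python, where the minimum length and the banned-word list are illustrative assumptions rather than any specific TSP's policy:

```python
import re

# Illustrative policy values - extend BANNED_WORDS with names, birthdays, etc.
MIN_LENGTH = 8
BANNED_WORDS = {"password", "admin"}

def meets_policy(password: str) -> bool:
    """Return True if the password satisfies the basic policy."""
    if len(password) < MIN_LENGTH:
        return False
    if not re.search(r"[A-Za-z]", password):      # must contain letters
        return False
    if not re.search(r"[^A-Za-z0-9]", password):  # must contain a special character
        return False
    lowered = password.lower()
    return not any(word in lowered for word in BANNED_WORDS)

print(meets_policy("S&fe-phrase!42"))  # True
print(meets_policy("password1"))       # False: banned word, no special character
```

A real deployment would pair a check like this with a breached-password list and a password manager, rather than relying on composition rules alone.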
There is also a wide range of password management tools that can help individuals achieve a best-practice approach – TSPs should be sharing that insight on a regular basis.
In addition, employees should be cautious about using network connections outside of their home or work. Public networks – now practically ubiquitous in availability – can expose corporate data on a personal device to real risk. It’s important to educate and encourage end users to only connect to trusted networks or secure the connection with proper VPN settings.
Social engineering and phishing
An attack that deceives a user or administrator into disclosing information is considered social engineering, and phishing is one of its most common forms. Cybercriminals usually attempt these attacks by engaging with the victim via email or chat, with the goal of uncovering sensitive information such as credit card details and passwords.
The reason they are so successful is that they appear to come from a credible source, but in many cases, there are definitive clues that should make users and employees suspicious. These can include weblinks containing random numbers and letters, typos, unexpected communication from senior colleagues, or even just a sense that something feels wrong about the situation.
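The "weblinks containing random numbers and letters" clue can even be checked mechanically. A toy heuristic in Python that flags hostnames with long random-looking runs of interleaved letters and digits – an illustration of the idea, not a production phishing detector:

```python
import re
from urllib.parse import urlparse

def looks_suspicious(url: str) -> bool:
    """Flag URLs whose hostname labels mix letters and digits in long
    random-looking runs - one of the phishing clues described above."""
    host = urlparse(url).hostname or ""
    for label in host.split("."):
        # Three or more letter/digit alternations (e.g. "x7k2q9") is
        # unusual for legitimate business domains.
        if re.search(r"(?:[a-z]\d|\d[a-z]){3,}", label):
            return True
    return False

print(looks_suspicious("https://x7k2q9-login.example"))  # True
print(looks_suspicious("https://www.example.com"))       # False
```

Real mail filters combine dozens of signals like this with sender reputation and link-rewriting, which is why user training still matters for the cases that slip through.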
But despite the efforts of cybercriminals to refine their approach to social engineering, well-established preventative rules have remained effective. The first is simple: just don't click. End users should trust their suspicions that something might not be right; they shouldn't click on a link or attachment or give out any sensitive information. Just as important is informing the internal IT team or the TSP.
Alerting the right person or department in a timely manner is critical in preventing a phishing scam from having company-wide repercussions. TSPs should always encourage clients to seek their help to investigate or provide next steps.
Physical and device security
Online threats aren’t the only risks that need to be included in end user training – physical security is just as important to keeping sensitive information protected. For example, almost everyone will identify with the stress caused by accidentally leaving their phone or tablet unguarded. And unfortunately, many of us know what it’s like to lose a phone or have one stolen – the first worry that usually comes to mind is about the safety of data.
The same risks apply to workplace data if a device is left unattended, lost or stolen, but there are ways to help end users minimize the risk:
1. Keep devices locked when not in use. For many smartphone users, this is an automatic setting or a good habit they have acquired, but it also needs to be applied to desktop computers and laptops, where the same approach isn’t always applied.
2. Secure physical documents. Despite the massive surge in digital document creation and sharing, many organizations still need to use physical copies of key documents. They should be locked away when not needed, particularly outside of working hours.
3. Destroy old and unwanted information. Data protection extends to shredding documents that are no longer needed, and TSPs should be including reminders about these risks as an important addendum to their training on digital security.
This also extends to the impact BYOD policies can have on network security. For TSPs, this is a critical consideration as the massive growth in personal devices connected to corporate networks significantly increases their vulnerability to attack.
BYOD users are susceptible to the same threats as company desktops and without pre-installed endpoint protection, can be even less secure. Mobile devices must, therefore, be securely connected to the corporate network and remain in the employee’s possession. Helping them to manage device security will also help TSP security teams maintain the highest levels of vigilance.
Empowering end users to guard against the most common risks might feel intangible to employers and TSPs alike, and in reality, they may never be able to measure how many attacks they have defeated. But for TSPs, employees should form a central part of their overall security service, because failing to work with them risks failing their clients.
Cloud adoption was already strong heading into 2020. According to a study by O’Reilly, 88% of businesses were using the cloud in some form in January 2020. The global pandemic just accelerated the move to SaaS tools. This seismic shift where businesses live day-to-day means a massive amount of business data is making its way into the cloud.
All this data is absolutely critical for core business functions. However, it is all too often mistakenly considered “safe” thanks to blind trust in the SaaS platform. But human error, cyberattacks, platform updates and software integrations can all easily compromise or erase that data … and totally destroy a business.
According to Microsoft, 94% of businesses report security benefits since moving to the cloud. Although there are definitely benefits, data is by no means fully protected – and the threat to cloud data continues to rise, especially as it ends up spread across multiple applications.
Organizations continue to overlook the simple steps they can take to better protect cloud data and their business. In fact, our 2020 Ecommerce Data Protection Survey found that one in four businesses has already experienced data loss that immediately impacted sales and operations.
Cloud data security illusions
Many companies confuse cloud storage with cloud backup. Cloud storage is just that – you’ve stored your data in the cloud. But what if, three years later, you need a record of that data and how it was moved or changed for an audit? What if you are the target of a cyberattack and suddenly your most important data is no longer accessible? What if you or an employee accidentally delete all the files tied to your new product line?
Simply storing data in the cloud does not mean it is fully protected. The ubiquity of cloud services like Box, Dropbox, Microsoft 365, Google G Suite/Drive, etc., has created the illusion that cloud data is protected and easily accessible in the event of a data loss event. Yet even the most trusted providers manage data by following the Shared Responsibility Model.
The same goes for increasingly popular business apps like BigCommerce, GitHub, Shopify, Slack, Trello, QuickBooks Online, Xero, Zendesk and thousands of other SaaS applications. Cloud service providers only fully protect system-level infrastructure and data. So while they ensure reliability and recovery for system-wide failures, the cloud app data of individual businesses is still at risk.
In the current business climate, human errors are even more likely. With the pandemic increasing the amount of remote work, employees are navigating constant distractions tied to health concerns, increasing family needs and an inordinate amount of stress.
Complicating things further, many online tools do not play nicely with each other. APIs and integrations can be a challenge when trying to move or share data between apps. Without a secure backup, one cyberattack, failed integration, faulty update or click of the mouse could wipe out the data a business needs to survive.
While top SaaS platforms continue to expand their security measures, data backup and recovery is missing from the roadmap. Businesses need to take matters into their own hands.
Current cloud backup best practices
In its most rudimentary form, a traditional cloud backup essentially makes a copy of cloud data to support business continuity and disaster recovery initiatives. Proactively protecting cloud data ensures that if that business-critical data is compromised, corrupted, deleted or inaccessible, they still have immediate access to a comprehensive, usable copy of the data needed to avoid business disruption.
From multi-level user access restrictions, password managers and regularly timed manual downloads, there are many basic (even if tedious) ways for businesses to better protect their cloud data. Some companies have invested in building more robust backup solutions to keep their cloud business data safe. However, homegrown backup solutions are costly and time intensive as they require constant updates to keep pace with ever-changing APIs.
In contrast, third-party backup solutions can provide an easier to manage, cost/time-efficient way to protect cloud data. There is a wide range of offerings though – some more reputable and secure than others. Any time business data is entrusted to a third party, reputability and security of that vendor must take center stage. If they have your data, they need to protect it.
Cloud backup providers need to meet stringent security and regulatory requirements so look for explicit details about how they secure your data. As business data continues to move to the cloud, storage limits, increasingly complex integrations and new security concerns will heighten the need for comprehensive cloud data protection.
The trend of business operations moving to the cloud started long before the quarantine. Nevertheless, the cloud storage and security protocols most businesses currently rely on to protect cloud data are woefully insufficient.
Critical business data used to be stored (and secured) in a central location. Companies invested significant resources to manage walls of servers. With SaaS, everything is in the cloud and distributed – apps running your store, your account team, your mailing list, your website, etc. Business data in the backend of each SaaS tool looks very different and isn’t easily transferable.
All the data has become decentralized, and most backups can’t keep pace. It isn’t a matter of “if” a business will one day have a data loss event, it’s “when”. We need to evolve cloud backups into a comprehensive, distributed cloud data protection platform that secures as much business-critical data as possible across various SaaS platforms.
As businesses begin to rethink their approach to data protection in the cloud era, business backups will need to alleviate the worry tied to losing data – even in the cloud. True business data protection means not worrying about whether an online store will be taken out, a third-party app will cause problems, an export is fully up to date, where your data is stored, if it is compliant or if you have all of the information needed to fully (and easily) get apps back up and running in case of an issue.
Delivering cohesive cloud data protection, regardless of which application it lives in, will help businesses break free from backup worry. The next era of cloud data protection needs to let business owners and data security teams sleep easier.
In the past few years, the use of automation in many spheres of cybersecurity has increased dramatically, but penetration testing has remained stubbornly immune to it.
While crowdsourced security has evolved as an alternative to penetration testing in the past 10 years, it’s not based on automation but simply throwing more humans at a problem (and in the process, creating its own set of weaknesses). Recently though, tools that can be used to automate penetration testing under certain conditions have surfaced – but can they replace human penetration testers?
How do automated penetration testing tools work?
To answer this question, we need to understand how they work, and crucially, what they can’t do. While I’ve spent a great deal of the past year testing these tools and comparing them in like-for-like tests against a human pentester, the big caveat here is that these automation tools are improving at a phenomenal rate, so depending on when you read this, it may already be out of date.
First of all, the “delivery” of the pen test is done by either an agent or a VM, which effectively simulates the pentester’s laptop and/or attack proxy plugging into your network. So far, so normal. The pentesting bot will then perform reconnaissance on its environment by running the same scans a human would – typically a vulnerability scan with a tool of choice, or a ports and services sweep with Nmap or Masscan. Once the bot has established where it sits within the environment, it will filter through what it has found, and this is where its similarity to vulnerability scanners ends.
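The ports-and-services sweep that both humans and bots start from can be approximated in a few lines. A minimal TCP connect-scan sketch – the target and port list are placeholders, real tools like Nmap are far faster and stealthier, and you should only ever scan hosts you are authorized to test:

```python
import socket

def sweep(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Placeholder target and ports for illustration
print(sweep("127.0.0.1", [22, 80, 443]))
```

What separates an automated pentesting tool from this sketch (and from a vulnerability scanner) is everything that happens after the sweep: deciding which finding to actually exploit.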
Vulnerability scanners will simply list a series of vulnerabilities and potential vulnerabilities that have been found with no context as to their exploitability and will simply regurgitate CVE references and CVSS scores. They will sometimes paste “proof” that the system is vulnerable but don’t cater well to false positives.
Automated penetration testing tools will then choose out of this list of targets the “best” system to take over, making decisions based on ease of exploit, noise and other such factors. So, for example, if presented with a Windows machine vulnerable to EternalBlue, the tool may favor that over brute forcing an open SSH port that authenticates with a password, as it’s a known quantity and much faster and easier to exploit.
Once it gains a foothold, it will propagate itself through the network, mimicking the way a pentester or attacker would – the only difference being that it actually installs a version of its own agent on the exploited machine and continues its pivot from there (there are variations in how different vendors do this).
It then starts the process again from scratch, but this time will also make sure it forensically investigates the machine it has landed on to give it more ammunition to continue its journey through your network. This is where it will dump password hashes if possible or look for hardcoded credentials or SSH keys. It will then add this to its repertoire for the next round of its expansion. So, while previously it may have just repeated the scan/exploit/pivot cycle, this time it will try a pass-the-hash attack or try connecting to an SSH port using the key it just pilfered. Then, it pivots again from here and so on and so forth.
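The "look for hardcoded credentials or SSH keys" step is essentially pattern-matching over files on the compromised host. A toy sketch of the idea – the two patterns here are illustrative, whereas real credential scanners ship hundreds of rules:

```python
import re
from pathlib import Path

# Illustrative patterns only; real tools use far richer rule sets
PATTERNS = {
    "password assignment": re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
    "private key header": re.compile(r"-----BEGIN (?:RSA |OPENSSH )?PRIVATE KEY-----"),
}

def scan_file(path: Path) -> list[str]:
    """Return the names of credential patterns found in a file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return []
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

# Demo against a file created for illustration
demo = Path("demo.conf")
demo.write_text("db_password = hunter2\n")
print(scan_file(demo))  # ['password assignment']
```

Each hit becomes "ammunition" in exactly the sense described above: a credential or key the tool can replay against the next set of targets.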
If you notice a lot of similarities to how a human pentester behaves, you’re absolutely right: a lot of this is exactly how pentesters (and, to a lesser extent, attackers) behave. The toolsets are similar, and the techniques and vectors used to pivot are identical in many ways.
So, what’s different?
First of all, the act of automation gives a few advantages over the ageing pentesting methodology (and equally chaotic crowdsourced methodology).
The speed of the test and reporting is many magnitudes faster, and the reports are actually surprisingly readable (after verifying with some QSAs, they will also pass the various PCI DSS pentesting requirements).
No more waiting days or weeks for a report that has been drafted by human hands and gone through a few rounds of QA before being delivered. This points to one of the primary weaknesses of human pen tests: with the adoption of continuous delivery, many pen test reports are out of date as soon as they are delivered, because the environment on which the test was performed has since been updated multiple times, potentially introducing vulnerabilities and misconfigurations that weren’t present at the time of the test. This is why traditional pentesting is more akin to a snapshot of your security posture at a particular point in time.
Automated penetration testing tools get around this limitation by being able to run tests daily, or twice daily, or on every change, and deliver a report almost instantly.
The second advantage is the entry point. A human pentester may be given a specific entry point into your network, while an automated pentesting tool can run the same pen test multiple times from different entry points to uncover vulnerable vectors within your network and monitor various impact scenarios depending on the entry point. While this is theoretically possible with a human, it would require a huge budgetary investment, since each test from a different entry point must be paid for separately.
What are the downsides?
1. Automated penetration testing tools don’t understand web applications – at all. While they will detect something like a web server at the ports/services level, they won’t understand that you have an IDOR vulnerability in your internal API or an SSRF in an internal web page that you can use to pivot further. This is because the web stack today is complex and, to be fair, even specialist scanners (like web application scanners) have a hard time detecting vulnerabilities beyond low-hanging fruit such as XSS or SQLi.
2. You can only use automated pentesting tools “inside” the network. As most exposed company infrastructure will be web-based and automated pentesting tools don’t understand these, you’ll still need to stick to a good old-fashioned human pentester for testing from the outside.
To conclude, the technology shows a lot of promise, but it’s still early days and while they aren’t ready to make human pentesters redundant just yet, they do have a role in meeting today’s offensive security challenges that can’t be met without automation.
Connected devices are becoming more ingrained in our daily lives and the burgeoning IoT market is expected to grow to 41.6 billion devices by 2025. As a result of this rapid growth and adoption at the consumer and commercial level, hackers are infiltrating these devices and mounting destructive hacks that put sensitive information and even lives at risk.
These attacks and potential dangers have kept security at top of mind for manufacturers, technology companies and government organizations, which ultimately led to the U.S. House of Representatives passing the IoT Cybersecurity Improvement Act of 2020.
The bill focuses on increasing the security of federal devices with standards provided by the National Institute of Standards and Technology (NIST), which will cover devices from development to the final product. The bill also requires Homeland Security to review the legislation at least every five years and revise it as necessary, keeping it up to date with the latest innovative tech and any new standards that come along with it.
Although it is a step in the right direction to tighten security for federal devices, it only scratches the surface of what the IoT industry needs as a whole. However, as this bill is the first of its kind to be passed by the House, we need to consider how it will help shape the future of IoT security:
Better transparency throughout the device lifecycle
With a constant focus on innovation in the IoT industry, security is often overlooked in the rush to get a product onto shelves. By the time devices are ready to be purchased, important details like vulnerabilities may not have been disclosed throughout the supply chain, leaving sensitive data exposed to exploitation. To date, many companies have been hesitant to publish these weak spots in their device security, preferring to keep them under wraps and their competition and hackers at bay.
However, the bill now mandates that contractors and subcontractors involved in developing and selling IoT products to the government have a program in place to report vulnerabilities and their subsequent resolutions. This is key to increasing end-user transparency on devices and will better inform the government on risks found in the supply chain, so it can update the bill’s guidelines as needed.
For the future of securing connected devices, multiple stakeholders throughout the supply chain need to be held accountable for better visibility and security to guarantee adequate protection for end-users.
Public-private partnerships on the rise
Per this bill, for the development of the security guidelines, the government will need to consult with cybersecurity experts to align on industry standards and best practices for better IoT device protection.
Working with industry-led organizations can provide accurate insight and allow the government to see current loopholes to create standards for real-world application. Encouraging these public-private partnerships is essential to advancing security in a more holistic way and will ensure guidelines and standards aren’t created in a silo.
Shaping consumer security from a federal focused bill
The current bill only focuses on securing devices at the federal level, but because manufacturers and technology companies often serve both the commercial/government and consumer spaces, its requirements will naturally spill over into the consumer device market too. It’s not practical for a manufacturer to follow two separate sets of guidelines for the two categories of products, so the standards in place for government-contracted devices will likely be applied to all devices on the assembly line.
As the focus shifts to consumer safety after this bill, the industry has raised the challenge that manufacturers may eventually have to test products against two bills – one with federal and one with consumer standards. The only way to remedy the issue is to establish global, adoptable and scalable standards across all industries to streamline security and provide appropriate protection for all device categories.
Universal standards – Are we there yet?
While this bill is a great start for the IoT industry and may serve as the catalyst for future IoT bills, there is still some room for improvement for the future of connected device security. In its current form, the bill does not explicitly define the guidelines for security, which can be frustrating and confusing for IoT device stakeholders who need to comply with them. With multiple government organizations and industry-led programs creating their own set of standards, the only way to truly propel this initiative forward is to harmonize and clearly define standards for universal adoption.
While the IoT bill signals momentum from the US government to prioritize IoT security, an international effort must be made to establish global standards and protect connected devices, as the IoT knows no boundaries. Syncing these standards and enforcing them through trusted certification programs will hold manufacturers and tech companies accountable for security and provide transparency for all end-users on a global scale.
The IoT Cybersecurity Improvement Act of 2020 is a landmark accomplishment for the IoT industry but is only just the beginning. As the world grows more integrated through connected devices, security standards will need to evolve to keep up with the digital transformation occurring in nearly every industry.
Due to security remaining a key concern for device manufacturers, tech companies, consumers and government organizations, the need for global standards remains in focus and will likely need an act of Congress to make them a priority.
It’s safe to assume that we need to protect presidential election data, since it’s one of the most critical sets of information available. Not only does it ensure the legitimacy of elections and the democratic process, but also may contain personal information about voters. Given its value and sensitivity, it only makes sense that this data would be a target for cybercriminals looking for some notoriety – or a big ransom payment.
In 2016, more needed to be done to protect the election and its data from foreign interference and corruption. This year, both stringent cybersecurity and backup and recovery protocols should be implemented in anticipation of sophisticated foreign interference.
Cybersecurity professionals in government and the public sector should look to the corporate world and mimic – and if possible improve upon – the policies and procedures being applied to keep data safe. Particularly as voting systems become more digitized, the likelihood of IT issues increases, so it’s essential to have a data protection plan in place to account for these challenges and emerging cyber threats.
The risk of ransomware in 2020
Four years ago, ransomware attacks impacting election data were significantly less threatening. Today, however, the thought of cybercriminals holding election data hostage in exchange for a record-breaking sum of money sounds entirely plausible. A recent attack on Tyler Technologies, a software provider for local governments across the US, highlighted the concerns held across the nation and left many to wonder if the software providers in charge of presidential election data might suffer a similar fate.
Regardless of whether data is recoverable, ransomware attacks typically cause IT downtime as security teams attempt to prevent the attack from spreading. While this is the best practice to follow to contain the malware, the impacts of system downtime on the day of the election could be catastrophic. To combat this, government officials should look for solutions that offer continuous availability technology.
The best defense also integrates cybersecurity and data protection, as removing segmentation streamlines the process of detecting and responding to attacks, while simultaneously recovering systems and data. This will simplify the process for stressed-out government IT teams already tasked with dealing with the chaos of election day.
Developing a plan to protect the presidential election
While ransomware is a key concern, it isn’t the only threat that election data faces. The 2016 election revealed to what degree party election data could be interfered with. Now that we know the risks, we also know that focusing solely on cybersecurity without a backup plan in place isn’t enough to keep this critical data secure.
The first step in any successful data protection plan is a robust backup strategy. Since the databases and cloud platforms that compile voter data are likely to be big targets, government security pros should store copies of that data in multiple locations to reduce the chance that one attack takes down an entire system. Ideally, they should follow the 3-2-1 rule: keep three copies of data, on two different types of storage, with one copy offsite or in the cloud.
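The fan-out behind the 3-2-1 rule can be automated in a few lines. A minimal sketch that copies one source file to a second local location and a stand-in "offsite" directory – the paths are placeholders, and a real setup would push the third copy to cloud object storage rather than a local folder:

```python
import shutil
from pathlib import Path

def backup_321(source: Path, second_copy_dir: Path, offsite_dir: Path) -> list[Path]:
    """Copy `source` so three copies exist: the original, a second local
    copy, and an 'offsite' copy (a plain directory standing in for cloud
    storage in this sketch)."""
    copies = []
    for dest_dir in (second_copy_dir, offsite_dir):
        dest_dir.mkdir(parents=True, exist_ok=True)
        # copy2 preserves file metadata alongside the contents
        copies.append(Path(shutil.copy2(source, dest_dir / source.name)))
    return copies

# Demo with placeholder paths
src = Path("voter_rolls.db")
src.write_text("example data")
made = backup_321(src, Path("backups/local"), Path("backups/offsite"))
print([str(p) for p in made])
```

The point of the rule is that no single failure – one disk, one site, one ransomware infection – can destroy every copy at once.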
It’s also important to protect these backups with the same level of care as you would critical IT infrastructure. Backups are only helpful if they’re clean and easily accessible – particularly for a time-sensitive situation like the presidential election, it’s important to be able to recover backed-up data as quickly as possible. The last thing government officials need is missing or inaccessible votes on election day.
The need to protect this data doesn’t end when voting does, however. Government IT pros also must consider implementing a strategy for protecting stored voter data long-term. Compliance with data privacy regulations surrounding voter data is key to maintaining a fair democratic process, so they should make sure to consider any local regulations that may dictate how this data is stored and accessed. Protection that extends after the election will also be important for safeguarding against cyberattacks that might target this data down the line.
Not only could cyberattacks hold voter data hostage, they may also affect how quickly the results of the election can be determined. Voter data that is lost altogether might cause an entire election to be called a fraud. This would have a far-reaching impact on people across America, and on our democratic process as a whole. Luckily, this is avoidable with a data protection and ransomware response plan that prepares government officials for when an attack happens.
Like most American businesses, middle market companies have been forced to rapidly implement a variety of work-from-home strategies to sustain productivity and keep employees safe during the COVID-19 pandemic. This shift, in most cases, was conducted with little chance for appropriate planning and due diligence.
This is especially true in regard to the security and compliance of remote work solutions, such as new cloud platforms, remote access products and outsourced third parties. Many middle market companies lacked the resources of their larger counterparts to diagnose and address potential gaps in a timely manner, and the pressure to make these changes to continue operations meant that many of these shortcomings were not even considered at the time.
Perhaps more important than the potential security risks that could come with these hastily deployed solutions is the risk that an organization could realize later that the mechanisms they deployed turned out to lack controls required by a variety of regulatory and industry standards.
Take medical and financial records as an example. In a normal scenario, an organization typically walls off systems that touch such sensitive data, creating a segmented environment where few systems or people can interact with that data, and even then, only under tightly controlled conditions. However, when many companies set up work-from-home solutions, they quickly realized that their new environment did not work with the legacy architecture protecting the data. Employees could not effectively do their jobs, so snap decisions were made to allow the business to operate.
In this situation, many companies took actions, such as removing segmentation to allow the data and systems to be accessible by remote workers, which unfortunately exposed sensitive information directly to the main corporate environment. Many companies also shifted data and processes into cloud platforms without determining if they were approved for sensitive data. In the end, these workarounds may have violated any number of regulatory, industry or contractual obligations.
In the vast majority of these circumstances, there is no evidence of any type of security event or a data breach, and the control issues have been identified and addressed. However, companies are now in a position where they know that, for a period of time (as short as a few days or months in some cases), they were technically non-compliant.
Many middle market companies now face a critical dilemma: as the time comes to perform audits or self-attestation reports, do they report these potential lapses to regulatory or industry entities, such as the SEC, PCI Council, HHS, DoD or FINRA, knowing that could ultimately result in significant reputational and financial damages and, if so, to what extent?
A temporary regulatory grace period is needed, and soon
The decision is a pivotal one for a significant number of middle market companies. To date, regulators have not been showing much sympathy during the pandemic, and a large segment of the middle market finds itself in a no man’s land. If they had not made these decisions to continue business operations as best they could, they would have gone out of business. But now, if they do report these violations, the related fines and penalties will likely result in the same fate.
A solution for this crucial predicament is a potential temporary regulatory grace period. Regulatory bodies or lawmakers could establish a window of opportunity for organizations to self-identify the type and duration of their non-compliance, what investigations were done to determine that no harm came to pass, and what steps were, or will be, taken to address the issue.
Currently, the concept of a regulatory grace period is slowly gaining traction in Washington, but time is of the essence. Middle market companies are quickly approaching the time when they will have to determine just what to disclose during these upcoming attestation periods.
Companies understand that mistakes were made, but those issues would not have arisen under normal circumstances. The COVID-19 pandemic is an unprecedented event that companies could never have planned for. Business operations and personal safety initially consumed management’s thought processes as companies scrambled to keep the lights on.
Ultimately, many companies made the right decisions from a business perspective to keep people working and avoid suffering a data breach, even in a heightened environment of data security risks. Any grace period would not absolve the organization of responsibility for any regulatory exposures. For example, if a weakness has not already been identified and addressed, the company could still be subject to fines and other penalties at the conclusion of the amnesty window.
Even a proposed grace period would not mean that middle market companies would be completely out of the woods. Companies often must comply with a host of non-regulatory obligations, and while a grace period may provide some relief from government regulatory agencies, it would not solve similar challenges that may arise related to industry regulations, such as PCI or lapses in third-party agreements.
But a grace period from legislators could be a significant positive first step and potentially represent a blueprint for other bodies. Without some kind of lifeline, many middle market companies that disclose their temporary compliance gaps would likely be unable to continue operations, and a significant number of jobs may subsequently be lost.
Mark Sangster, VP and Industry Security Strategist at eSentire, is a cybersecurity evangelist who has spent significant time researching and speaking to peripheral factors influencing the way that legal firms integrate cybersecurity into their day-to-day operations. In this interview, he discusses MDR services and the MDR market.
What are the essential building blocks of a robust MDR service?
Managed Detection and Response (MDR) must combine two elements. The first is an aperture that can collect the full spectrum of telemetry. This means not only monitoring the network through traditional logging and perimeter defenses but also collecting security telemetry from endpoints, cloud services and connected IoT devices.
The wider the aperture, the more light (in this case, signal) is captured. This creates the need for rapid ingestion of a growing volume of data in near real-time to aid rapid detection.
The second element is the ability to respond beyond simple alerting. This means the ability to disrupt north-south traffic at the TCP/IP, DNS and geo-fencing levels, and to disrupt application layer traffic or at least block specific applications. It also encompasses the ability to perform endpoint forensics to determine the integrity of accessed data and systems, and to quarantine devices, from endpoints to industrial IoT devices and other operational systems such as medical diagnosis and patient-management systems.
What makes an MDR service successful?
MDR services require hyper-vigilance and the ability to scale and rapidly adapt to secure emerging technology. This includes OT-based systems beyond the typical auspices of IT. It also requires an ecosystem of talent: working with universities to guide curricula, training programs, certification maintenance, and career paths through the Security Operations Center (SOC) into threat intelligence and lab work.
The MDR market is becoming more competitive and the number of providers continues to grow. What is the best approach for choosing an MDR provider?
Like any vendor selection, it is more about determining your requirements than picking vendors based on boasts or comprehensive data sheets. It means testing vendor capabilities and carefully matching them to your requirements. For example, if you don’t have internal forensics capabilities, then a vendor that is good at detection but only provides alerts won’t solve your problem.
Find a vendor that provides full services and matches your internal capabilities.
How do you see the MDR market evolving in the near future? What are organizations looking for?
More and more, companies will move to outsourced SOC-like services. This means MDR firms need to up their game, and a tighter definition must come into play to weed out pretender firms. Too much rests on their capabilities.
MDR vendors also need to focus on emerging tech (5G, IIoT, etc.) and be prepared to defend against larger adversaries, like organized criminal elements and state-sponsored actors who now troll the midmarket space.
Organizations are often forced to make critical security decisions based on threat data that is not accurate, relevant and fresh, a Neustar report reveals.
Just 60% of cybersecurity professionals surveyed indicate that the threat data they receive is both timely and actionable, and only 29% say the data they receive is both extremely accurate and relevant to the threats their organization is facing at that moment.
Few orgs basing decisions on near real-time data
With regard to the timeliness of threat data, only 27% of organizations are able to base their security decisions on near real-time data, while 25% say they receive updates hourly and another 24% receive updates several times per day.
“With the pandemic exacerbating the sheer volume of threats and the nature of remote workforces creating a broader range of vulnerabilities, it is more critical than ever that organizations have access to actionable, contextualized, near real-time threat data to power the network and application security tools they use to detect and block malicious actors,” said Rodney Joffe, Senior VP, Security CTO, Fellow at Neustar.
“A timely, actionable and highly relevant security threat data feed can help deliver curated insights to security teams, allowing them to better identify and mitigate risks such as malicious domain generation algorithms, suspicious DNS tunneling attempts, sudden activity by domains with little or no history, and hijacked or spoofed domains.”
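As a toy illustration of one signal such a feed might curate: names produced by domain generation algorithms tend to have higher character entropy than human-chosen domains. The function names and threshold below are my own illustrative assumptions, not Neustar's method, and real feeds combine many more signals (domain age, DNS query patterns, registration history):

```python
# Illustrative sketch only: a crude entropy-based DGA signal.
# Threshold and helper names are hypothetical, not from any vendor.
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits of entropy per character in a domain label."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Flag high-entropy second-level labels as possible DGA output.
    Real threat feeds would corroborate with domain history and DNS activity."""
    label = domain.split(".")[0]
    return shannon_entropy(label) > threshold

print(looks_generated("netflix"))           # False: human-chosen, low entropy
print(looks_generated("x9fjq2kzmw7lp3vn"))  # True: random-looking, high entropy
```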
Greatest concerns for security pros
According to the report, 37% of organizations state that they have been the victim of a successful domain spoofing attempt, and 31% of a domain hacking attempt, within the last 12 months.
Findings from the latest NISC research also highlighted a 12.4-point year-on-year increase in the International Cyber Benchmarks Index. Calculated based on the changing level of threats and impact of cyberattacks, the index has maintained an upward trend since May 2017.
During July and August 2020, system compromise and distributed denial-of-service attacks (both 21%) were ranked as the greatest concerns for security professionals, followed by ransomware (20%) and theft of intellectual property (17%).
During this period, targeted hacking (63%) was most likely to be perceived as an increasing threat to organizations, followed by ransomware and DDoS attacks (both 62%). In this round of the survey, 72% of participating enterprises indicated that they had been on the receiving end of a DDoS attack at some point, compared to an average of 52% over the 20 survey rounds.
While almost 95 percent of cybersecurity issues can be traced back to human error, such as accidentally clicking on a malicious link, most governments have not invested enough to educate their citizens about the risks, according to a report from the Oliver Wyman Forum.
Cyber risk literacy of the population
Cyber literacy, along with financial literacy, is a new 21st century priority for governments, educational institutions, and businesses.
“The situation has become even more pressing during the pandemic as our reliance on the internet has grown. Yet many citizens still lack the basic skills to keep themselves, their communities, and their employers safe.”
The index assessed 50 geographies, including the European Union, on the present cyber risk literacy of their populations, and on the nature of related education and training available to promote and enable future cyber risk literacy.
Specifically, the Index measures five key drivers of cyber risk literacy and education: the public’s motivation to practice good cybersecurity hygiene; government policies to improve cyber literacy; how well cyber risks are addressed by education systems; how well businesses are raising their employees’ cyber skills; and the degree to which digital access and skills are shared broadly within the population.
How are assessed countries doing?
Switzerland, Singapore and the UK topped the list because of their strong government policies, education systems and training, practical follow through and metrics as well as population motivation to reduce risk.
Switzerland, the number one ranked country, has a comprehensive implementation document that lays out specific responsibilities along with what national or provincial legislation is required. Specific milestones are set, and timelines are assigned to ensure accountability regardless of who oversees the government.
Singapore, which is ranked second, has prioritized cybersecurity education efforts from early childhood to retirees. It established the Cyber Security Agency of Singapore to keep its cyberspace safe and secure. Its cyber wellness courses occur over multiple grades and focus on social and practical safety tips such as understanding cyber bullying.
The UK, ranked third, has the most integrated cyber system because it incorporates cyber risk into both primary and secondary education. The UK’s National Cyber Security Strategy of 2016-2021 is also one of the strongest plans globally. The US ranked 10th.
Countries that rank lower lack an overall national strategy and fail to emphasize cyber risk in schools. Some countries in emerging markets are only beginning to identify cybersecurity as a national concern.
“Governments that want to improve the cyber risk literacy of their citizens can use the index to strengthen their strategy by way of adopting new mindsets, trainings, messaging, accessibility and best practices,” Mee added. “With most children using the internet by the age of four, it is never too early to start teaching your citizens to protect themselves.”
Earlier this year, businesses across the globe transitioned to a remote work environment almost overnight at unprecedented scale and speed. Security teams worked around the clock to empower and protect their newly distributed teams.
Protect and support a remote workforce
Cisco’s report found that the majority of organizations around the world were at best only somewhat prepared to support their remote workforce. But the pandemic has accelerated the adoption of technologies that enable employees to work securely from anywhere and on any device, preparing businesses to be flexible for whatever comes next. The survey found that:
- 85% of organizations said that cybersecurity is extremely important or more important than before COVID-19
- Secure access is the top cybersecurity challenge faced by the largest proportion of organizations (62%) when supporting remote workers
- One in two respondents said endpoints, including corporate laptops and personal devices, are a challenge to protect in a remote environment
- 66% of respondents indicated that the COVID-19 situation will result in an increase in cybersecurity investments
“Security and privacy are among the most significant social and economic issues of our lifetime,” said Jeetu Patel, SVP and GM of Cisco’s Security & Applications business.
“Cybersecurity historically has been overly complex. With this new way of working here to stay and organizations looking to increase their investment in cybersecurity, there’s a unique opportunity to transform the way we approach security as an industry to better meet the needs of our customers and end-users.”
People worried about the privacy of their tools
People are worried about the privacy of remote work tools and are skeptical about whether companies are doing what is needed to keep their data safe. Despite the pandemic, they want little or no change to privacy requirements, and they want to see companies be more transparent regarding how they use their customers’ data.
Organizations have the opportunity to build confidence and trust by embedding privacy into their products and communicating their practices clearly and simply to their customers. The survey found that:
- 60% of respondents were concerned about the privacy of remote collaboration tools
- 53% want little or no change to existing privacy laws
- 48% feel they are unable to effectively protect their data today, and the main reason is that they can’t figure out what companies are doing with their data
- 56% believe governments should play a primary role in protecting consumer data, and consumers are highly supportive of the privacy laws enacted in their country
“Privacy is much more than just a compliance obligation. It is a fundamental human right and business imperative that is critical to building and maintaining customer trust,” said Harvey Jang, VP, Chief Privacy Officer, Cisco. “The core privacy and ethical principles of transparency, fairness, and accountability will guide us in this new, digital-first world.”
CIOs and IT leaders who use composability to deal with continuing business disruption due to the COVID-19 pandemic and other factors will make their enterprises more resilient, more sustainable and make more meaningful contributions, according to Gartner.
Analysts said that composable business means architecting for resilience and accepting that disruptive change is the norm. It supports a business that exploits the disruptions digital technology brings by making things modular – mixing and matching business functions to orchestrate the proper outcomes.
It supports a business that senses – or discovers – when change needs to happen; and then uses autonomous business units to creatively respond.
For some enterprises digital strategies became real for the first time
According to the 2021 Gartner Board of Directors survey, 69% of corporate directors want to accelerate enterprise digital strategies and implementations to help deal with the ongoing disruption. For some enterprises that means that their digital strategies became real for the first time, and for others that means rapidly scaling digital investments.
“Composable business is a natural acceleration of the digital business that organizations live every day,” said Daryl Plummer, research VP, Chief of Research and Gartner Fellow. “It allows organizations to finally deliver the resilience and agility that these interesting times demand.”
Don Scheibenreif, research VP at Gartner, explained that composable business starts with three building blocks — composable thinking, which ensures creative thinking is never lost; composable business architecture, which ensures flexibility and resiliency; and composable technologies, which are the tools for today and tomorrow.
“The world today demands something different from us. Composing – flexible, fluid, continuous, even improvisational – is how we will move forward. That is why composable business is more important than ever,” said Mr. Scheibenreif.
“During the COVID-19 pandemic crisis, most CIOs leveraged their organizations’ existing digital investments, and some CIOs accelerated their digital strategies by investing in some of the three composable building blocks,” said Tina Nunno, research VP and Gartner Fellow.
“To ensure their organizations were resilient, many CIOs also applied at least one of the four critical principles of composability, gaining more speed through discovery, greater agility through modularity, better leadership through orchestration, and resilience through autonomy.”
Composable business resilience
Analysts said that these four principles can be viewed differently depending on which building block organizations are working with:
- In composable thinking, these are design principles. They guide an organization’s approach to conceptualizing what to compose, and when.
- In composable business architecture, they are structural capabilities, giving an organization the mechanisms to use in architecting its business.
- In composable technologies, they are product design goals driving the features of technology that support the notions of composability.
“In the end, organizations need the principles and the building blocks to intentionally make composability real,” said Mr. Plummer.
The building blocks of composability can be used to pivot quickly to a new opportunity, industry, customer base or revenue stream. For example, a large Chinese retailer used composability when the pandemic hit to help re-architect their business. They used composable thinking and chose to pivot to live streaming sales activities.
They embraced social marketing technology and successfully retrained over 5,000 in-store sales and customer support staff to become live streaming hosts. The retailer suffered no layoffs and minimal revenue loss.
“Throughout 2020, CIOs and IT leaders maintained their composure and delivered tremendous value,” said Ms. Nunno. “The next step is to create a more composable business using the three building blocks and applying the four principles. With composability, organizations can achieve digital acceleration, greater resiliency and the ability to innovate through disruption.”
Email attacks have moved past standard phishing and become more targeted over the years. In this article, I will focus on email impersonation attacks, outline why they are dangerous, and provide some tips to help individuals and organizations reduce their risk exposure to impersonation attacks.
What are email impersonation attacks?
Email impersonation attacks are malicious emails where scammers pretend to be a trusted entity to steal money and sensitive information from victims. The trusted entity being impersonated could be anyone – your boss, your colleague, a vendor, or a consumer brand you get automated emails from.
Email impersonation attacks are tough to catch and worryingly effective because we tend to take quick action on emails from known entities. Scammers use impersonation in concert with other techniques to defraud organizations and steal account credentials, sometimes without victims realizing they have been defrauded until days after the fact.
Fortunately, we can all follow some security hygiene best practices to reduce the risk of email impersonation attacks.
Tip #1 – Look out for social engineering cues
Email impersonation attacks are often crafted with language that induces a sense of urgency or fear in victims, coercing them into taking the action the email wants them to take. Not every email that makes us feel these emotions will be an impersonation attack, of course, but it’s an important factor to keep an eye out for, nonetheless.
Here are some common phrases and situations you should look out for in impersonation emails:
- Short deadlines given at short notice for processes involving the transfer of money or sensitive information.
- Unusual purchase requests (e.g., iTunes gift cards).
- Employees requesting sudden changes to direct deposit information.
- Vendors sharing new bank account details for upcoming payments.
This email impersonation attack exploits the COVID-19 pandemic to make an urgent request for gift card purchases.
Tip #2 – Always do a context check on emails
Targeted email attacks bank on victims being too busy and “doing before thinking” instead of stopping and engaging with the email rationally. While it may take a few extra seconds, always ask yourself if the email you’re reading – and what the email is asking for – make sense.
- Why would your CEO really ask you to purchase iTunes gift cards at two hours’ notice? Have they done it before?
- Why would Netflix emails come to your business email address?
- Why would the IRS ask for your SSN and other sensitive personal information over email?
To sum up this tip, I’d say: be a little paranoid while reading emails, even if they’re from trusted entities.
Tip #3 – Check for email address and sender name deviations
To stop email impersonation, many organizations have deployed keyword-based protection that catches emails where the email addresses or sender names match those of key executives (or other related keywords). To get past these security controls, impersonation attacks use email addresses and sender names with slight deviations from those of the entity the attacks are impersonating. Some common deviations to look out for are:
- Changes to the spelling, especially ones that are missed at first glance (e.g., “ei” instead of “ie” in a name).
- Changes based on visual similarities to trick victims (e.g. replacing “rn” with “m” because they look alike).
- Business emails sent from personal accounts like Gmail or Yahoo without advance notice. It’s advisable to validate the identity of the sender through secondary channels (text, Slack, or phone call) if they’re emailing you with requests from their personal account for the first time.
- Descriptive changes to the name, even if the changes fit in context. For example, attackers impersonating a Chief Technology Officer named Ryan Fraser may send emails with the sender name as “Ryan Fraser, Chief Technology Officer”.
- Changes to the components of the sender name (e.g., adding or removing a middle initial, abbreviating Mary Jane to MJ).
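The deviation patterns above can be sketched as a simple string-similarity heuristic. This is an illustrative example under my own assumptions (the homoglyph table, threshold and function names are hypothetical), not how any particular email security product works:

```python
# Illustrative heuristic: flag sender addresses that are suspiciously
# close to, but not exactly, a trusted address. Hypothetical sketch only.
from difflib import SequenceMatcher

# A few visual substitutions attackers use (deliberately not exhaustive)
HOMOGLYPHS = [("rn", "m"), ("vv", "w"), ("0", "o"), ("1", "l")]

def normalize(address: str) -> str:
    """Lowercase and collapse visually similar character sequences."""
    address = address.lower()
    for fake, real in HOMOGLYPHS:
        address = address.replace(fake, real)
    return address

def is_lookalike(sender: str, trusted: str, threshold: float = 0.85) -> bool:
    """True if sender differs from the trusted address but is nearly
    identical after homoglyph normalization."""
    if sender.lower() == trusted.lower():
        return False  # exact match: not a deviation
    ratio = SequenceMatcher(None, normalize(sender), normalize(trusted)).ratio()
    return ratio >= threshold

# "rn" in "cornpany" reads as "m" at a glance
print(is_lookalike("ryan.fraser@cornpany.com", "ryan.fraser@company.com"))  # True
```

A real control would also compare display names against the address, check whether the domain was recently registered, and validate authentication results (SPF, DKIM, DMARC) rather than rely on string similarity alone.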
Tip #4 – Learn the “greatest hits” of impersonation phrases
Email impersonation has been around for long enough that there are well-known phrases and tactics we need to be aware of. The emails don’t always have to be directly related to money or data – the first email is sometimes a simple request, just to see who bites and buys into the email’s faux legitimacy. Be aware of the following phrases/context:
- “Are you free now?”, “Are you at your desk?” and related questions are frequent opening lines in impersonation emails. Because they seem like harmless emails with simple requests, they get past email security controls and lay the bait.
- “I need an urgent favor”, “Can you do something for me within the next 15 minutes?”, and other phrases implying the email is of a time-sensitive nature. If you get this email from your “CEO”, your instinct might be to respond quickly and be duped by the impersonation in the process.
- “Can you share your personal cell phone number?”, “I need your personal email”, and other out-of-context requests for personal information. The objective of these requests is to harvest information and build out a profile of the victim; once adversaries have enough information, they have another entity to impersonate.
Tip #5 – Use secondary channels of authentication
Enterprise adoption of two-factor authentication (2FA) has grown considerably over the years, helping safeguard employee accounts and reduce the impact of account compromise.
Individuals should try to replicate this best practice for any email that makes unusual requests related to money or data. For example:
- Has a vendor emailed you with a sudden change in their bank account details, right when an invoice is due? Call or text the vendor and confirm that they sent the email.
- Did your manager email you asking for gift card purchases? Send them a Slack message (or whatever productivity app you use) to confirm the request.
- Did your HR representative email you a COVID resource document that needs email account credentials to be viewed? Check the veracity of the email with the HR rep.
Even if you’re reaching out to very busy people for this additional authentication, they will understand and appreciate your caution.
These tips are meant as starting points for individuals and organizations to better understand email impersonation and start addressing its risk factors. But effective protection against email impersonation can’t come down to eye tests alone. Enterprise security teams should conduct a thorough audit of their email security stack and explore augmenting native email security with controls that offer specific protection against impersonation.
With email more important to our digital lives than ever, it’s vital that we are able to believe people are who their email says they are. Email impersonation attacks exploit this sometimes-misplaced belief. Stopping email impersonation attacks will require a combination of security hygiene, email security solutions that provide specific impersonation protection, and some healthy paranoia while reading emails – even if they seem to be from people you trust.
A failing cybersecurity market is contributing to the ineffective performance of cybersecurity technology, research from Debate Security reveals.
Based on over 100 comprehensive interviews with business and cybersecurity leaders from large enterprises, together with vendors, assessment organizations, government agencies, industry associations and regulators, the research shines a light on why technology vendors are not incentivized to deliver products that are more effective at reducing cyber risk.
The report supports the view that efficacy problems in the cybersecurity market are primarily due to economic issues, not technological ones. The research addresses three key themes and ultimately arrives at a consensus for how to approach a new model.
Cybersecurity technology is not as effective as it should be
90% of participants reported that cybersecurity technology is not as effective as it should be when it comes to protecting organizations from cyber risk. Trust in technology to deliver on its promises is low, and yet when asked how organizations evaluate cybersecurity technology efficacy and performance, there was not a single common definition.
Pressure has been placed on improving people and process related issues, but ineffective technology has become accepted as normal and, shamefully, inevitable.
The underlying problem is one of economics, not technology
92% of participants reported that there is a breakdown in the market relationship between buyers and vendors, with many seeing deep-seated information asymmetries.
Outside government, few buyers today use detailed, independent cybersecurity efficacy assessment as part of their cybersecurity procurement process, and not even the largest organizations reported having the resources to conduct all the assessments themselves.
As a result, vendors are incentivized to focus on other product features, and on marketing, deprioritizing cybersecurity technology efficacy – one of several classic signs of a “market for lemons”.
Coordinated action between stakeholders only achieved through regulation
Unless buyers demand greater efficacy, regulation may be the only way to address the issue. Overcoming first-mover disadvantages will be critical to fixing the broken cybersecurity technology market.
Many research participants believe that coordinated action between all stakeholders can only be achieved through regulation – though some hold out hope that coordination could be achieved through sectoral associations.
In either case, 70% of respondents feel that independent, transparent assessment of technology would help solve the market breakdown. Setting standards on technology assessment, rather than on technology itself, could avoid stifling innovation.
Defining cybersecurity technology efficacy
Participants in this research broadly agree that four characteristics are required to comprehensively define cybersecurity technology efficacy.
To be effective, cybersecurity solutions need:
- The capability to deliver the stated security mission (be fit-for-purpose)
- The practicality that enterprises need to implement, integrate, operate and maintain them (be fit-for-use)
- The quality in design and build to avoid vulnerabilities and negative impact
- The provenance in the vendor company, its people and supply chain, such that these do not introduce additional security risk
“In cybersecurity right now, trust doesn’t always sell, and good security doesn’t always sell and isn’t always easy to buy. That’s a real problem,” said Ciaran Martin, advisory board member, Garrison Technology.
“Why we’re in this position is a bit of a mystery. This report helps us understand it. Fixing the problem is harder. But our species has fixed harder problems and we badly need the debate this report calls for, and industry-led action to follow it up.”
“Company boards are well aware that cybersecurity poses potentially existential risk, but are generally not well equipped to provide oversight on matters of technical detail,” said John Cryan, Chairman of Man Group.
“Boards are much better equipped when it comes to the issues of incentives and market dynamics revealed by this research. Even if government regulation proves inevitable, I would encourage business leaders to consider these findings and to determine how, as buyers, corporates can best ensure that cybersecurity solutions offered by the market are fit for purpose.”
“As a technologist and developer of cybersecurity products, I really feel for cybersecurity professionals who are faced with significant challenges when trying to select effective technologies,” said Henry Harrison, CSO of Garrison Technology.
“We see two noticeable differences when selling to our two classes of prospects. For security-sensitive government customers, technology efficacy assessment is central to buying behavior – but we rarely see anything similar when dealing with even the most security-sensitive commercial customers. We take from this study that in many cases this has less to do with differing risk appetites and more to do with structural market issues.”
Trustwave released a report which depicts how technology trends, compromise risks and regulations are shaping how organizations’ data is stored and protected.
Data protection strategy
The report is based on a recent survey of 966 full-time IT professionals who are cybersecurity decision makers or security influencers within their organizations.
Over 75% of respondents work in organizations with over 500 employees in key geographic regions including the U.S., U.K., Australia and Singapore.
“Our findings illustrate organizations are under enormous pressure to secure data as workloads migrate off-premises, attacks on cloud services increases and ransomware evolves. Gaining complete visibility of data either at rest or in motion and eliminating threats as they occur are top cybersecurity challenges all industries are facing.”
More sensitive data moving to the cloud
Types of data organizations are moving into the cloud have become increasingly sensitive, so a solid data protection strategy is crucial. Ninety-six percent of total respondents stated they plan to move sensitive data to the cloud over the next two years, with 52% planning to include highly sensitive data, led by Australia at 57% among the regions surveyed.
Not surprisingly, when asked to rate the importance of securing data in digital transformation initiatives, respondents gave an average score of 4.6 out of a possible five.
Hybrid cloud model driving digital transformation and data storage
Of those surveyed, a 55% majority use both on-premises and public cloud to store data, with 17% using public cloud only. Singapore organizations use the hybrid cloud model most frequently at 73%, 18 percentage points above the average, while U.S. organizations employ it the least at 45%.
Government respondents were the most likely to store data on-premises only, at 39%, 11 percentage points above average. Additionally, 48% of respondents stored data using the hybrid cloud model during a recent digital transformation project, with only 29% relying solely on their own databases.
Most organizations use multiple cloud services
Seventy percent of organizations surveyed use between two and four public cloud services, and 12% use five or more. At 14%, the U.S. had the most instances of using five or more public cloud services, followed by the U.K. at 13% and Australia and Singapore at 9% each. Only 18% of organizations queried use one public cloud service or none at all.
Perceived threats do not match actual incidents
Thirty-eight percent of organizations are most concerned with malware and ransomware, followed by phishing and social engineering at 18%, application threats at 14%, insider threats at 9%, privilege escalation at 7% and attacks exploiting misconfigurations at 6%.
Interestingly, when asked about threats actually experienced, phishing and social engineering came in first at 27%, followed by malware and ransomware at 25%. The U.K. and Singapore experienced the most phishing and social engineering incidents at 32% and 31% respectively, while the U.S. and Australia experienced the most malware and ransomware attacks at 30% and 25%.
Respondents in the government sector reported the highest incidence of insider threats at 13%, 5 percentage points above the average.
Patching practices show room for improvement
A resounding 96% of respondents have patching policies in place; of those, 71% rely on automated patching while 29% patch manually. Overall, 61% of organizations patched within 24 hours and 28% patched between 24 and 48 hours.
The highest percentages patching within a 24-hour window came from Australia at 66% and the U.K. at 61%. Unfortunately, 4% of organizations took anywhere from a week to over a month to patch.
Reliance on automation driving key security processes
In addition to the high percentage of organizations using automated patching processes, findings show 89% of respondents employ automation to check for overprivileged users or to lock down access credentials once an individual has left their job or changed roles.
This finding correlates with the low level of concern for insider threats and for data compromise via privilege escalation reported in the survey. However, organizations should be cautious about assuming that revoking a user’s access to applications also revokes their access to databases, which is often not the case.
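One way to catch this gap is to reconcile the list of active users in the application directory against the accounts that still hold database grants. The short Python sketch below illustrates the idea with entirely hypothetical usernames and data; in practice the two sets would come from an identity provider export and a query against the database’s own user catalog.

```python
# Illustrative sketch: flag database accounts whose owners have already
# been deprovisioned from the application directory. All names are
# hypothetical examples, not output from any real system.

def find_orphaned_db_accounts(app_users, db_users):
    """Return database accounts with no matching active application user.

    app_users: set of usernames still active in the app directory.
    db_users:  set of usernames that hold database grants.
    """
    return sorted(db_users - app_users)

# Example: 'jsmith' was offboarded from the application,
# but a database grant was left behind.
active_app_users = {"adoe", "bkim"}
database_accounts = {"adoe", "bkim", "jsmith"}

orphans = find_orphaned_db_accounts(active_app_users, database_accounts)
print(orphans)  # ['jsmith'] -- flag for review or automated revocation
```

Running such a reconciliation on a schedule, rather than only at offboarding time, also catches grants created outside the normal provisioning workflow.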
Data regulations having minor impact on database security strategies
These findings may suggest a lack of alignment between information technology and other departments, such as legal, that are responsible for ensuring stipulations like the ‘right to be forgotten’ are properly enforced to avoid severe penalties.
Small teams with big responsibilities
Of those surveyed, 47% had security teams of only six to 15 members. Respondents from Singapore had the smallest teams, with 47% reporting between one and ten members, while the U.S. had the largest, with 22% reporting team sizes of 21 or more, 2 percentage points above the average.
Surprisingly, 32% of government respondents run security operations with teams of just six to ten members.