2020 presented us with many surprises, but the world of data privacy somewhat bucked the trend. Many industry verticals suffered losses, uncertainty and closures, but the protection of individuals and their information continued to truck on.
After many websites simply blocked access unless visitors accepted their cookies (a practice now deemed unlawful), we received clarity on cookie consent from the European Data Protection Board (EDPB). With the end of Privacy Shield, we witnessed the loss of a legal basis for cross-border data transfers.
Severe fines levied for General Data Protection Regulation (GDPR) non-compliance showed organizations that the regulation is far from toothless and that data protection authorities are not easing up just because there is an ongoing global pandemic.
What can we expect in 2021? Undoubtedly, the number of data privacy cases brought before the courts will continue to rise. That’s not necessarily a bad thing: with each case comes additional clarity and precedent on many different areas of the regulation that, to date, is open to interpretation and conjecture.
Last time I spoke to the UK Information Commissioner’s Office regarding a technicality surrounding data subject access requests (DSARs) submitted by a representative, I was told that I was far from the only person enquiring about it, and this only illustrates some of the ambiguities faced by those responsible for implementing and maintaining compliance.
Of course, this is just the GDPR. There are many other data privacy legislative frameworks to consider. We fully expect 2021 to bring alignment of the ePrivacy Regulation with the GDPR and to eradicate the conflict that exists today, particularly around consent, soft opt-in, and the like, where the GDPR is very clear but the current Privacy and Electronic Communications Regulations (PECR) are not quite so clear.
These developments are confined to Europe, but across the globe we're seeing continued development of data localization laws, which organizations are mandated to adhere to. In the US, the California Consumer Privacy Act (CCPA) has kickstarted a swathe of data privacy reforms within many states, along with calls for something similar at the federal level.
The following year(s) will see that build and, much like with the GDPR, precedent-setting cases are needed to provide more clarity regarding the rules. Will Americans look to replace the shattered Privacy Shield framework, or will they adopt Standard Contractual Clauses (SCCs) more widely? SCCs are a very strong legal basis, providing the clauses are updated to align with the GDPR (something else we’d expect to see in 2021), and I suspect the US will take this road as the realization of the importance of trade with the EU grows.
Other noteworthy movements in data protection law are happening in Russia, where amendments to the Federal Law on Personal Data take a closer look at TLS as a protective measure, and in the Philippines, where the Data Privacy Act of 2012 is set to be replaced by a new bill (currently a work in progress, but it's coming).
One of the biggest events of 2021 will be the UK leaving the EU. The British implementation of the GDPR comes in the form of the Data Protection Act 2018. Aside from a few derogations, it's the GDPR and that's great… as far as it goes. Having strong local data privacy laws is good, but after enjoying 47 years (at the time of writing) of free movement within the Union, how will being outside of the EU impact British business?
It is thought and hoped that the UK will be granted an adequacy decision fairly swiftly, given that UK law has historically aligned with that of the Union, but there is no guarantee. The uncertainty around how data transfers will look in future might result in British industry using more SCCs, and the currently low-priority plans to make Binding Corporate Rules (BCRs) easier and more affordable will come sharply to the fore as demand for them rises.
One thing is certain: it's going to be a fascinating year for data privacy, and we are excited to see clearer definitions, increased certification, precedent-setting case law and whatever else unfolds as we continue to navigate a journey of governance, compliance and security.
Trustwave released a report which depicts how technology trends, compromise risks and regulations are shaping how organizations’ data is stored and protected.
Data protection strategy
The report is based on a recent survey of 966 full-time IT professionals who are cybersecurity decision makers or security influencers within their organizations.
Over 75% of respondents work in organizations with over 500 employees in key geographic regions including the U.S., U.K., Australia and Singapore.
“Our findings illustrate organizations are under enormous pressure to secure data as workloads migrate off-premises, attacks on cloud services increases and ransomware evolves. Gaining complete visibility of data either at rest or in motion and eliminating threats as they occur are top cybersecurity challenges all industries are facing.”
More sensitive data moving to the cloud
The types of data organizations are moving into the cloud have become increasingly sensitive, which makes a solid data protection strategy crucial. Ninety-six percent of total respondents stated they plan to move sensitive data to the cloud over the next two years, with 52% planning to include highly sensitive data; Australia, at 57%, leads the regions surveyed.
Not surprisingly, when asked to rate the importance of securing data regarding digital transformation initiatives, an average score of 4.6 out of a possible high of five was tallied.
Hybrid cloud model driving digital transformation and data storage
Of those surveyed, most (55%) use both on-premises and public cloud to store data, with 17% using public cloud only. Singapore organizations use the hybrid cloud model most frequently, at 73% (18 percentage points above the average), while U.S. organizations employ it the least, at 45%.
Government respondents are the most likely to store data on-premises only, at 39% (11 percentage points above average). Additionally, 48% of respondents stored data using the hybrid cloud model during a recent digital transformation project, with only 29% relying solely on their own databases.
Most organizations use multiple cloud services
Seventy percent of organizations surveyed were found to use between two and four public cloud services and 12% use five or more. At 14%, the U.S. had the most instances of using five or more public cloud services followed by the U.K. at 13%, Australia at 9% and Singapore at 9%. Only 18% of organizations queried use zero or just one public cloud service.
Perceived threats do not match actual incidents
Thirty-eight percent of organizations are most concerned with malware and ransomware, followed by phishing and social engineering at 18%, application threats at 14%, insider threats at 9%, privilege escalation at 7% and misconfiguration attacks at 6%.
Interestingly, when asked about actual threats experienced, phishing and social engineering came in first at 27% followed by malware and ransomware at 25%. The U.K. and Singapore experienced the most phishing and social engineering incidents at 32% and 31% and the U.S. and Australia experienced the most malware and ransomware attacks at 30% and 25%.
Respondents in the government sector had the highest incidence of insider threats, at 13% (5 percentage points above the average).
Patching practices show room for improvement
A resounding 96% of respondents have patching policies in place; of those, however, 71% rely on automated patching and 29% employ manual patching. Overall, 61% of organizations patched within 24 hours and 28% patched between 24 and 48 hours.
The highest percentage patching within a 24-hour window came from Australia at 66% and the U.K. at 61%. Unfortunately, 4% of organizations took a week to over a month to patch.
Reliance on automation driving key security processes
In addition to a high percentage of organizations using automated patching processes, findings show 89% of respondents employ automation to check for overprivileged users or lock down access credentials once an individual has left their job or changed roles.
This finding correlates with the survey's low level of concern about insider threats and data compromise via privilege escalation. Organizations should not assume that removing a user's access to applications also removes their access to databases; often it does not.
Data regulations having minor impact on database security strategies
These findings may suggest a lack of alignment between information technology and other departments, such as legal, responsible for helping ensure stipulations like ‘the right to be forgotten’ are properly enforced to avoid severe penalties.
Small teams with big responsibilities
Of those surveyed, 47% had a security team of only six to 15 members. Respondents from Singapore had the smallest teams, with 47% reporting between one and ten members, while the U.S. had the largest, with 22% reporting a team size of 21 or more (2 percentage points above the average).
Surprisingly, 32% of government respondents run security operations with teams of just six to ten members.
The importance of privacy and data protection is a critical issue for organizations as it transcends beyond legal departments to the forefront of an organization’s strategic priorities.
FairWarning research, based on survey results from more than 550 global privacy and data protection, IT, and compliance professionals, outlines the characteristics and behaviors of advanced privacy and data protection teams.
By examining the trends of privacy adoption and maturity across industries, the research uncovers adjustments that security and privacy leaders need to make to better protect their organization’s data.
The prevalence of data and privacy attacks
Insights from the research reinforce the importance of privacy and data protection as 67% of responding organizations documented at least one privacy incident within the past three years, and over 24% of those experienced 30 or more.
Additionally, 50% of all respondents reported at least one data breach in the last three years, with 10% reporting 30 or more.
Overall immaturity of privacy programs
Despite increased regulations, breaches and privacy incidents, organizations have not rapidly accelerated the advancement of their privacy programs as 44% responded they are in the early stages of adoption and 28% are in middle stages.
Healthcare and software rise to the top
Despite an overall lack of maturity across industries, healthcare and software organizations reflect more maturity in their privacy programs, as compared to insurance, banking, government, consulting services, education institutions and academia.
Harnessing the power of data and privacy programs
Respondents understand the significant benefits of a mature privacy program, as organizations experience greater gains across every area measured, including increased employee privacy awareness, mitigation of data breaches, greater consumer trust, reduced privacy complaints, quality and innovation, competitive advantage, and operational efficiency.
Of note, more mature companies believe they experience the largest gain in reducing privacy complaints (30.3% higher than early stage respondents).
Attributes and habits of mature privacy and data protection programs
Companies with more mature privacy programs are more likely to have C-Suite privacy and security roles within their organization than those in the mid- to early-stages of privacy program development.
Additionally, 88.2% of advanced stage organizations know where most or all of their personally identifiable information/personal health information is located, compared to 69.5% of early stage respondents.
Importance of automated tools to monitor user activity
Insights reveal a clear distinction between the maturity levels of privacy programs and related benefits of automated tools as 54% of respondents with more mature programs have implemented this type of technology compared with only 28.1% in early stage development.
Automated tools enable organizations to monitor all user activity in applications and efficiently identify anomalous activity that signals a breach or privacy violation.
“It is exciting to see healthcare at the top when it comes to privacy maturity. However, as we dig deeper into the data, we find that 37% of respondents with 30 or more breaches are from healthcare, indicating that there is still more work to be done.
“This study highlights useful guidance on steps all organizations can take regardless of industry or size to advance their program and ensure they are at the forefront of privacy and data protection.”
“As the research has demonstrated, it is imperative that security and privacy professionals recognize the importance of implementing privacy and data protection programs to not only reduce privacy complaints and data breaches, but increase operational efficiency.”
Increasingly demanded by consumers, data privacy laws can create onerous burdens on even the most well-meaning businesses. California presents plenty of evidence to back up this statement, as more than half of organizations that do business in California still aren’t compliant with the California Consumer Privacy Act (CCPA), which went into effect earlier this year.
As companies struggle with their existing compliance requirements, many fear that a new privacy ballot initiative – the California Privacy Rights Act (CPRA) – could complicate matters further. While it’s true that if passed this November, the CPRA would fundamentally change the way businesses in California handle both customer and employee data, companies shouldn’t panic. In fact, this law presents an opportunity for organizations to change their relationship with employee data to their benefit.
CPRA, the Californian GDPR?
Set to appear on the November 2020 ballot, the CPRA, also known as CCPA 2.0 or Prop 24 (its name on the ballot), builds on what is already the most comprehensive data protection law in the US. In essence, the CPRA will bring data protection in California nearer to the current European legal standard, the General Data Protection Regulation (GDPR).
In the process of “getting closer to GDPR,” the CCPA would gain substantial new components. Besides enhancing consumer rights, the CPRA also creates new provisions for employee data as it relates to their employers, as well as data that businesses collect from B2B business partners.
Although controversial, the CPRA is likely to pass. August polling shows that more than 80% of voters support the measure. However, many businesses do not. This is because, at first glance, the CPRA appears to create all kinds of legal complexities in how employers can and cannot collect information from workers.
Fearful of having to meet the same demanding requirements as their European counterparts, many organizations view the prospect of the CPRA becoming law with alarm. In reality, however, if the CPRA passes, it might not be as scary as some businesses think.
CPRA and employment data
The CPRA is actually a lot more lenient than the GDPR in how it polices the relationship between employers and employees' data. Unlike its EU equivalent, the proposed Californian law already contains many exceptions acknowledging that worker-employer relations are not like consumer-vendor relations.
Moreover, the CPRA extends the CCPA exemption for employers, set to end on January 1, 2021. This means that if the CPRA passes into law, employers would be released from both their existing and potential new employee data protection obligations for two more years, until January 1, 2023. This exemption would apply to most provisions under the CPRA, including the personal information collected from individuals acting as job applicants, staff members, employees, contractors, officers, directors, and owners.
However, employers would still need to provide notice of data collection and maintain safeguards for personal information. It’s highly likely that during this two-year window, additional reforms would be passed that might further ease employer-employee data privacy requirements.
Nonetheless, employers should act now
While the CPRA won’t change much overnight, impacted organizations shouldn’t wait to take action, but should take this time to consider what employee data they collect, why they do so, and how they store this information.
This is especially pertinent now that businesses are collecting more data than ever on their employees. With workplace monitoring companies like Prodoscore reporting a 600% rise in interest from prospective customers since the pandemic began, we are seeing rapid growth in companies looking to monitor how, where, and when their employees work.
This trend emphasizes the fact that the information flow between companies and their employees is mostly one-sided (i.e., from the worker to the employer). Currently, businesses have no legal requirement to be transparent about this information exchange. That will change for California-based companies if the CPRA comes into effect and they will have no choice but to disclose the type of data they’re collecting about their staff.
The only sustainable solution for impacted businesses is to be transparent about their data collection with employees and work towards creating a “culture of privacy” within their organization.
Creating a culture of privacy
Rather than viewing employee data privacy as some perfunctory obligation where the bare minimum is done for the sake of appeasing regulators, companies need to start thinking about worker privacy as a benefit. Presented as part of a benefits package, comprehensive privacy protection is a perk that companies can offer prospective and existing employees.
Privacy benefits can include access to privacy protection services that give employees privacy benefits beyond the workplace. Packaged alongside privacy awareness training and education, these can create privacy plus benefits that can be offered to employees alongside standard perks like health or retirement plans. Doing so will build a culture of privacy which can help companies ensure they’re in regulatory compliance, while also making it easier to attract qualified talent and retain workers.
It's also worth bearing in mind that creating a culture of privacy doesn't necessarily mean that companies have to stop monitoring employee activity. In fact, employees are less worried about being watched than about the possibility of their employers misusing their data. Their fears are well-founded: although over 60% of businesses today use workforce data, only 3 in 10 business leaders are confident that this data is treated responsibly.
For this reason, companies that want to keep employee trust and avoid bad PR need to prioritize transparency. This could mean drawing up a “bill of rights” that lets employees know what data is being collected and how it will be used.
Research into employee satisfaction backs up the value of transparency. Studies show that while only 30% of workers are comfortable with their employer monitoring their email, the number of employees open to the use of workforce data goes up to 50% when the employer explains the reasons for doing so. This number further jumps to 92% if employees believe that data collection will improve their performance or well-being or come with other personal benefits, like fairer pay.
On the other hand, most employees would leave an organization if its leaders did not use workplace data responsibly. Moreover, 55% of candidates would not even apply for a job with such an organization in the first place.
With many exceptions for workplace data management already built-in and more likely to come down the line, most employers should be able to easily navigate the stipulations CPRA entails.
That being said, if it becomes law this November, employers shouldn’t misuse the two-year window they have to prepare for new compliance requirements. Rather than seeing this time as breathing space before a regulatory crackdown, organizations should instead use it to be proactive in their approach to how they manage their employees’ data. As well as just ensuring they comply with the law, businesses should look at how they can turn employee privacy into an asset.
As data privacy stays at the forefront of employees’ minds, businesses that can show they have a genuine privacy culture will be able to gain an edge when it comes to attracting and retaining talent and, ultimately, coming out on top.
Manufacturing medical devices with cybersecurity firmly in mind is an endeavor that, according to Christopher Gates, an increasing number of manufacturers is trying to get right.
Healthcare delivery organizations have started demanding better security from medical device manufacturers (MDMs), he says, and many have implemented secure procurement processes and contract language for MDMs that address the cybersecurity of the device itself, secure installation, cybersecurity support for the life of the product in the field, liability for breaches caused by a device not following current best practices, ongoing support for events in the field, and so on.
“For someone like myself who has been focused on cybersecurity at MDMs for over 12 years, this is excellent progress as it will force MDMs to take security seriously or be pushed out of the market by competitors who do take it seriously. Positive pressure from MDMs is driving cybersecurity forward more than any other activity,” he told Help Net Security.
Gates is a principal security architect at Velentium and one of the authors of the recently released Medical Device Cybersecurity for Engineers and Manufacturers, a comprehensive guide to medical device secure lifecycle management, aimed at engineers, managers, and regulatory specialists.
In this interview, he shares his knowledge regarding the cybersecurity mistakes most often made by manufacturers, on who is targeting medical devices (and why), his view on medical device cybersecurity standards and initiatives, and more.
[Answers have been edited for clarity.]
Are attackers targeting medical devices with a purpose other than to use them as a way into a healthcare organization’s network?
The easy answer to this is “yes,” since many MDMs in the medical device industry perform “competitive analysis” on their competitors’ products. It is much easier and cheaper for them to have a security researcher spend a few hours extracting an algorithm from a device for analysis than to spend months or even years of R&D work to pioneer a new algorithm from scratch.
Also, there is a large, hundreds-of-millions-of-dollars industry of companies who “re-enable” consumed medical disposables. This usually requires some fairly sophisticated reverse-engineering to return the device to its factory default condition.
Lastly, the medical device industry, when grouped together with the healthcare delivery organizations, constitutes part of critical national infrastructure. Other industries in that class (such as nuclear power plants) have experienced very directed and sophisticated attacks targeting safety backups in their facilities. These attacks seem to be initial testing of a cyber weapon that may be used later.
While these are clearly nation-state level attacks, you have to wonder if these same actors have been exploring medical devices as a way to inhibit our medical response in an emergency. I’m speculating: we have no evidence that this has happened. But then again, if it has happened there likely wouldn’t be any evidence, as we haven’t been designing medical devices and infrastructure with the ability to detect potential cybersecurity events until very recently.
What are the most often exploited vulnerabilities in medical devices?
It won’t come as a surprise to anyone in security when I say “the easiest vulnerabilities to exploit.” An attacker is going to start with the obvious ones, and then increasingly get more sophisticated. Mistakes made by developers include:
Unsecured firmware updating
I personally always start with software updates in the field, as they are so frequently implemented incorrectly. An attacker’s goal here is to gain access to the firmware with the intent of reverse-engineering it back into easily-readable source code that will yield more widely exploitable vulnerabilities (e.g., one impacting every device in the world). All firmware update methods have at least three very common potential design vulnerabilities. They are:
- Exposure of the binary executable (i.e., it isn’t encrypted)
- Corrupting the binary executable with added code (i.e., there isn’t an integrity check)
- A rollback attack which downgrades the version of firmware to a version with known exploitable vulnerabilities (there isn’t metadata conveying the version information).
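To make the second and third of these checks concrete, here is a minimal Python sketch of an update handler that rejects tampered and downgraded images. The image layout and the symmetric HMAC key are illustrative assumptions only, not a real firmware format; production devices should use asymmetric signatures (e.g., ECDSA) so the on-device verification key cannot be used to forge images, and should encrypt the payload to address the first vulnerability as well.

```python
import hashlib
import hmac
import struct

# Illustrative only: a real device would hold a public verification key,
# not a symmetric secret that can also sign images.
DEVICE_KEY = b"per-device-secret"

def verify_update(blob: bytes, current_version: int) -> bytes:
    """Reject corrupted, tampered, or downgraded firmware images.

    Assumed (hypothetical) image layout:
    [4-byte big-endian version][payload][32-byte HMAC-SHA256 tag over the rest]
    """
    if len(blob) < 36:
        raise ValueError("image too short")
    header, tag = blob[:-32], blob[-32:]
    # Integrity check: detect any modification of version or payload
    expected = hmac.new(DEVICE_KEY, header, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: image corrupted or modified")
    # Rollback protection: refuse anything at or below the installed version
    (version,) = struct.unpack(">I", header[:4])
    if version <= current_version:
        raise ValueError("rollback refused: version %d <= current %d"
                         % (version, current_version))
    return header[4:]  # verified payload, safe to flash
```

The version field must be covered by the authentication tag, as it is here; otherwise an attacker can keep a valid old image and simply rewrite its version metadata.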
Overlooking physical attacks
Physical attacks can be mounted:
- Through an unsecured JTAG/SWD debugging port
- Via side-channel (power monitoring, timing, etc.) exploits to expose the values of cryptographic keys
- By sniffing internal busses, such as SPI and I2C
- By exploiting flash memory external to the microcontroller (a $20 cable can get it to dump all of its contents)
Manufacturing support left enabled
Almost every medical device needs certain functions to be available during manufacturing. These are usually for testing and calibration, and none of them should be functional once the device is fully deployed. Manufacturing commands are frequently documented in PDF files used for maintenance, and often only have minor changes across product/model lines inside the same manufacturer, so a little experimentation goes a long way in letting an attacker get access to all kinds of unintended functionality.
No communication authentication
Just because a communications medium connects two devices doesn’t mean that the device being connected to is the device that the manufacturer or end-user expects it to be. No communications medium is inherently secure; it’s what you do at the application level that makes it secure.
Bluetooth Low Energy (BLE) is an excellent example of this. Immediately following a pairing (or re-pairing), a device should always, always perform a challenge-response process (which utilizes cryptographic primitives) to confirm it has paired with the correct device.
I remember attending an on-stage presentation of a new class II medical device with a BLE interface. From the audience, I immediately started to explore the device with my smartphone. This device had no authentication (or authorization), so I was able to perform all operations exposed on the BLE connection. I was engrossed in this interface when I suddenly realized there was some commotion on stage as they couldn’t get their demonstration to work: I had accidentally taken over the only connection the device supported. (I then quickly terminated the connection to let them continue with the presentation.)
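The post-pairing check described above can be sketched as a simple HMAC-based challenge-response. This is an illustrative Python sketch, not a BLE implementation: the pre-shared key, nonce size, and transport are all assumptions, and a real device would provision the key during manufacturing and bind the exchange to the session.

```python
import hashlib
import hmac
import os

# Hypothetical pre-provisioned key shared by central and peripheral;
# in practice this would be installed per device during manufacturing.
SHARED_KEY = b"provisioned-device-key"

def make_challenge() -> bytes:
    # Central generates a fresh random nonce for every (re-)pairing
    return os.urandom(16)

def respond(challenge: bytes, key: bytes) -> bytes:
    # Peripheral proves it knows the key without ever transmitting it
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(respond(challenge, key), response)
```

Because the challenge is random and never reused, a device that merely replays an old response, or that never knew the key, fails the check, which is exactly the property pairing alone does not give you.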
What things must medical device manufacturers keep in mind if they want to produce secure products?
There are many aspects to incorporating security into your development culture. These can be broadly lumped into activities that promote security in your products, versus activities that convey a false sense of security and are actually a waste of time.
Probably the most important thing that a majority of MDMs need to understand and accept is that their developers have probably never been trained in cybersecurity. Most developers have limited knowledge of how to incorporate cybersecurity into the development lifecycle, where to invest time and effort in securing a device, what artifacts are needed for premarket submission, and how to properly utilize cryptography. Without knowing the details, many managers assume that security is being adequately included somewhere in their company's development lifecycle; most are wrong.
To produce secure products, MDMs must follow a secure “total product life cycle,” which starts on the first day of development and ends years after the product’s end of life or end of support.
They need to:
- Know the three areas where vulnerabilities are frequently introduced during development (design, implementation, and through third-party software components), and how to identify, prevent, or mitigate them
- Know how to securely transfer a device to production and securely manage it once in production
- Recognize an MDM's place in the device's supply chain: not at the end, but in the middle. An MDM's cybersecurity responsibilities extend up and down the chain. They have to contractually enforce cybersecurity controls on their suppliers, and they have to provide postmarket support for their devices in the field, up through and after end-of-life
- Create and maintain Software Bills of Materials (SBOMs) for all products, including legacy products. Doing this work now will help them stay ahead of regulation and save them money in the long run.
They must avoid mistakes like:
- Not thinking that a medical device needs to be secured
- Assuming their development team can secure, and is securing, their product
- Not designing-in the ability to update the device in the field
- Assuming that all vulnerabilities can be mitigated by a field update
- Only considering the security of one aspect of the design (e.g., its wireless communication protocol). Security is a chain: for the device to be secure, all the links of the chain need to be secure. Attackers are not going to consider certain parts of the target device ‘out of bounds’ for exploiting.
Ultimately, security is about protecting the business model of an MDM. This includes the device’s safety and efficacy for the patient, which is what the regulations address, but it also includes public opinion, loss of business, counterfeit accessories, theft of intellectual property, and so forth. One mistake I see companies frequently make is doing the minimum on security to gain regulatory approval, but neglecting to protect their other business interests along the way – and those can be very expensive to overlook.
What about the developers? Any advice on skills they should acquire or brush up on?
First, I’d like to take some pressure off developers by saying that it’s unreasonable to expect that they have some intrinsic knowledge of how to implement cybersecurity in a product. Until very recently, cybersecurity was not part of traditional engineering or software development curriculum. Most developers need additional training in cybersecurity.
And it’s not only the developers. More than likely, project management has done them a huge disservice by creating a system-level security requirement that says something like, “Prevent ransomware attacks.” What is the development team supposed to do with that requirement? How is it actionable?
At the same time, involving the company's network or IT cybersecurity team is not going to be an automatic fix either. IT cybersecurity diverges from embedded cybersecurity in many respects, from detection to the implementation of mitigations. No MDM is going to be putting a firewall on a device powered by a CR2032 battery anytime soon; yet there are ways to secure such a low-resource device.
In addition to the how-to book we wrote, Velentium will soon offer training available specifically for the embedded device domain, geared toward creating a culture of cybersecurity in development teams. My audacious goal is that within 5 years every medical device developer I talk to will be able to converse intelligently on all aspects of securing a medical device.
What cybersecurity legislation/regulation must companies manufacturing medical devices abide by?
It depends on the markets you intend to sell into. While the US has had the Food and Drug Administration (FDA) refining its medical device cybersecurity position since 2005, others are more recent entrants into this type of regulation, including Japan, China, Germany, Singapore, South Korea, Australia, Canada, France, Saudi Arabia, and the greater EU.
While all of these regulations have the same goal of securing medical devices, how they get there is anything but harmonized among them. Even the level of abstraction varies, with some focused on processes while others on technical activities.
But there are some common concepts represented in all these regulations, such as:
- Risk management
- Software bill of materials (SBOM)
- “Total Product Lifecycle”
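To make the SBOM concept from the list above concrete, here is a minimal sketch (using entirely hypothetical component names, versions, and advisory data) of the kind of check an SBOM enables: cross-referencing the components shipped in a device against a feed of known-vulnerable versions.

```python
# Hypothetical SBOM: component name -> version shipped in the device firmware
sbom = {
    "openssl": "1.0.2k",
    "freertos": "10.4.3",
    "mbedtls": "2.16.0",
}

# Hypothetical advisory feed: component -> versions with known vulnerabilities
known_vulnerable = {
    "openssl": {"1.0.2k", "1.1.0a"},
    "zlib": {"1.2.11"},
}

def affected_components(sbom, advisories):
    """Return the sorted names of components whose shipped version
    appears in an advisory entry."""
    return sorted(
        name for name, version in sbom.items()
        if version in advisories.get(name, set())
    )

print(affected_components(sbom, known_vulnerable))  # ['openssl']
```

Real SBOM formats (such as SPDX or CycloneDX) and real vulnerability feeds are far richer than this, but the core lookup that supports postmarket vulnerability management is essentially this join.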
But if you plan on marketing in the US, the two most important documents are the FDA’s:
- 2018 – Draft Guidance: Content of Premarket Submissions for Management of Cybersecurity in Medical Devices
- 2016 – Final Guidance: Postmarket Management of Cybersecurity in Medical Devices (The 2014 version of the guidance on premarket submissions can be largely ignored, as it no longer represents the FDA’s current expectations for cybersecurity in new medical devices).
What are some good standards for manufacturers to follow if they want to get cybersecurity right?
The Association for the Advancement of Medical Instrumentation’s standards are excellent. I recommend AAMI TIR57: 2016 and AAMI TIR97: 2019.
Also very good is the Healthcare & Public Health Sector Coordinating Council’s (HPH SCC) Joint Security Plan. And, to a lesser extent, the NIST Cyber Security Framework.
The work being done at the US Department of Commerce / NTIA on SBOM definition for vulnerability management and postmarket surveillance is very good as well, and worth following.
What initiatives exist to promote medical device cybersecurity?
Notable initiatives I’m familiar with include, first, the aforementioned NTIA work on SBOMs, now in its second year. There are also several excellent working groups at HSCC, including the Legacy Medical Device group and the Security Contract Language for Healthcare Delivery Organizations group. I’d also point to numerous working groups in the H-ISAC Information Sharing and Analysis Organization (ISAO), including the Securing the Medical Device Lifecycle group.
And I have to include the FDA itself here, which is in the process of revising its 2018 premarket draft guidance; we hope to see the results of that effort in early 2021.
What changes do you expect to see in the medical devices cybersecurity field in the next 3-5 years?
So much is happening at high and low levels. For instance, I hope to see the FDA get more of a direct mandate from Congress to enforce security in medical devices.
Also, many working groups of highly talented people are working on ways to improve the security posture of devices, such as the NTIA SBOM effort to improve the transparency of software “ingredients” in a medical device, allowing end-users to quickly assess their risk level when new vulnerabilities are discovered.
Semiconductor manufacturers continue to give us great mitigation tools in hardware, such as side-channel protections, cryptographic accelerators, and virtualized security cores; Arm TrustZone is a great example.
And at the application level, we’ll continue to see more and better packaged tools, such as cryptographic libraries and processes, to help developers avoid cryptography mistakes. Also, we’ll see more and better process tools to automate the application of security controls to a design.
HDOs and other medical device purchasers are better informed than ever before about embedded cybersecurity features and best practices. That trend will continue, and will further accelerate demand for better-secured products.
I hope to see some effort at harmonization between all the federal, state, and foreign regulations that have been recently released with those currently under consideration.
One thing is certain: legacy medical devices that can’t be secured will only go away when we can replace them with new medical devices that are secure by design. Bringing new devices to market takes a long time. There’s lots of great innovation underway, but really, we’re just getting started!
Global organizations continue to put their customers’ cardholder data at risk due to a lack of long-term payment security strategy and execution, according to the Verizon report.
With many companies struggling to retain qualified CISOs or security managers, the lack of long-term security thinking is severely impacting sustained compliance within the Payment Card Industry Data Security Standard (PCI DSS).
Cybercriminals still mostly targeting payment data
Payment data remains one of the most sought after and lucrative targets by cybercriminals with 9 out of 10 data breaches being financially motivated, as highlighted by the report. Within the retail sector alone, 99 percent of security incidents were focused on acquiring payment data for criminal use.
On average only 27.9 percent of global organizations maintained full compliance with the PCI DSS, which was developed to help businesses that offer card payment facilities protect their payment systems from breaches and theft of cardholder data.
More concerning, this is the third successive year that a decline in compliance has occurred with a 27.5 percentage point drop since compliance peaked in 2016.
“Unfortunately we see many businesses lacking the resources and commitment from senior business leaders to support long-term data security and compliance initiatives. This is unacceptable,” said Sampath Sowmyanarayan, President, Global Enterprise, Verizon Business.
“The recent coronavirus pandemic has driven consumers away from the traditional use of cash to contactless methods of payment with payment cards as well as mobile devices. This has generated more electronic payment data and consumers trust businesses to safeguard their information.
“Payment security has to be seen as an on-going business priority by all companies that handle any payment data, they have a fundamental responsibility to their customers, suppliers and consumers.”
Few organizations successfully test security systems
Additional findings shine a spotlight on security testing and access monitoring: only 51.9 percent of organizations successfully test security systems and processes, and only about two-thirds of all businesses adequately track and monitor access to business-critical systems.
In addition, only 70.6 percent of financial institutions maintain essential perimeter security controls.
“This report is a welcome wake-up call to organizations that strong leadership is required to address failures to adequately manage payment security. The Verizon Business report aligns well with Omdia’s view that the alignment of security strategy with organizational strategy is essential for organizations to maintain compliance, in this case with PCI DSS 3.2.1 to provide appropriate levels of payment security.
“It makes clear that long-term data security and compliance combines the responsibilities of a number of roles, including the Chief Information Security Officer, the Chief Risk Officer, and Chief Compliance Officer, which Omdia concurs with,” comments Maxine Holt, senior research director at Omdia.
Difficulty to maintain PCI DSS compliance impacts all businesses
SMBs were flagged as having their own unique struggles with securing payment data. While smaller businesses generally have less card data to process and store than larger businesses, they have fewer resources and smaller budgets for security, impacting the resources available to maintain compliance with PCI DSS.
Often the measures needed to protect sensitive payment card data are perceived as too time-consuming and costly by these smaller organizations, but as the likelihood of a data breach for SMBs remains high it is imperative that PCI DSS compliance is maintained.
The on-going CISO challenge: Security strategy and compliance
The report also explores the challenges CISOs face in designing, implementing and maintaining an effective and sustainable security strategy, and how these can ultimately contribute to the breakdown of compliance and data security management.
These problems were not found to be technological in nature, but as a result of organizational weaknesses which could be resolved by more mature management skills including creating formalized processes; building a business model for security as well as defining a sound security strategy with operating models and frameworks.
ManageEngine unveiled findings from a report that analyzes behaviors related to personal and professional online usage patterns.
Security restrictions on corporate devices
The report combines a series of surveys conducted among nearly 1,500 employees amid the pandemic as many people were accelerating online usage due to remote work and stay-at-home orders. The findings evaluate users’ web browsing habits, opinions about AI-based recommendations, and experiences with chatbot-based customer service.
“This research illuminates the challenges of unsupervised employee behaviors, and the need for behavioral analytics tools to help ensure business security and productivity,” said Rajesh Ganesan, vice president at ManageEngine.
“While IT teams have played a crucial role in supporting remote work and business continuity during the pandemic, now is an important time to evaluate the long-term effectiveness of current strategies and augment data analytics to IT operations that will help sustain seamless, secure operations.”
Risky online behaviors could compromise corporate data and devices
Interestingly, 37% of respondents say that there are no security restrictions on their corporate devices. Risky online activities such as visiting unsecured websites, sharing personal information, and downloading third-party software could therefore pose potential threats.
For example, 54% said they would still visit a website after receiving a warning about potential insecurities. Ignoring such warnings was also notably common among younger generations, including 42% of respondents aged 18-24 and 40% of those aged 25-34.
Remote work has its hiccups, but IT teams have been responsive
79% of respondents say they experience at least one technology issue weekly while working from home. The most common issues include slowed functionality and download speeds (40%) and unreliable connectivity (25%).
However, IT teams have been committed to solving these challenges. For example, 75% of respondents say it’s been easy to communicate with their IT teams to resolve these issues. Chatbots, AI, and automation are becoming increasingly more effective and trusted.
76% said their experience with chatbot-based support has been “excellent” or “satisfactory,” and 55% said their issue was resolved in a timely manner. As it relates to artificial intelligence, 67% say they trust these solutions to make recommendations for them.
The increasing comfort with automation technologies can help IT teams support both front and back-end business functions, especially during times of increased online activities due to the pandemic.
There are growing privacy concerns among Americans due to COVID-19, with nearly 70 percent saying they would be likely to sever ties with their healthcare provider if they found that their personal health data was unprotected, a CynergisTek survey reveals.
And as many employers seek to welcome staff back into physical workplaces, nearly half (45 percent) of Americans expressed concerns about keeping personal health information private from their employer.
“As healthcare systems and corporations continue to grapple with data challenges associated with COVID-19 – whether that’s more sophisticated, targeted cyber-attacks or the new requirements around interoperability and data sharing, concerns around personal data and consumer awareness of privacy rights will only continue to grow,” said Caleb Barlow, president and CEO of CynergisTek.
Patients contemplate cutting ties over unprotected health data
While many still assume personal data is under lock and key, 18 percent of Americans are beginning to question whether personal health data is being adequately protected by healthcare providers. In fact, 47.5 percent stated they were unlikely to use telehealth services again should a breach occur, sounding the alarm for a burgeoning telehealth industry predicted to be worth over $260B by 2026.
While 3 out of 4 Americans still largely trust their data is properly protected by their healthcare provider, tolerance is beginning to wane with 67 percent stating they would change providers if it was found that their data was not properly protected. When drilling deeper into certain age groups and health conditions, the survey also found that:
- Gen X (73 percent) and Millennials (70 percent) proved even less tolerant than other demographics, saying they would part ways with their providers over unprotected health data.
- 66 percent of Americans living with chronic health conditions stated they would be willing to change up care providers should their data be compromised.
Data shows that health systems that have not invested the time, money and resources to keep pace with the ever-changing threat landscape are falling behind. Of the nearly 300 healthcare facilities assessed, fewer than half met NIST Cybersecurity Framework guidelines.
Concern about sharing COVID-19 health data upon returning to work
As pressures mount for returning employees to disclose COVID-19 health status and personal interactions, an increasing conflict between ensuring public health safety and upholding employee privacy is emerging.
This is increasingly evident, with 45 percent stating a preference to keep personal health information private from their employer and over 1 in 3 expressing concerns about sharing COVID-19-specific health data, e.g. temperature checks. This scrutiny suggests that office reopenings may prove more complicated than anticipated.
“The challenges faced by both healthcare providers and employers during this pandemic have seemed insurmountable at times, but the battle surrounding personal health data and privacy is a challenge we must rise to,” said Russell P. Branzell, president and CEO of the College of Healthcare Information Management Executives.
“With safety and security top of mind for all, it is imperative that these organizations continue to take the necessary steps to fully protect this sensitive data from end to end, mitigating any looming cyberthreats while creating peace of mind for the individual.”
Beyond unwanted employer access to personal data, the survey found that nearly 60 percent of respondents expressed anxieties around their employer sharing personal health data externally to third parties such as insurance companies and employee benefit providers without consent.
This stands in stark contrast to Accenture’s recent survey, which found that 62 percent of C-suite executives confirmed they were exploring new tools to collect employee data. It is a reminder to employers to tread lightly when mandating employee health protocols and questionnaires.
“COVID-19 has thrown many curveballs at both healthcare providers and employers, and the privacy and protection of critical patient and employee data must not be ignored,” said David Finn, executive VP of strategic innovation of CynergisTek.
“By getting ahead of the curve and implementing system-wide risk posture assessments and ensuring employee opt-in/opt-out functions when it comes to sharing personal data, these organizations can help limit these privacy and security risks.”
The ongoing debate surrounding privacy protection in the global data economy reached a fever pitch with July’s “Schrems II” ruling at the European Court of Justice, which struck down the Privacy Shield – a legal mechanism enabling companies to transfer personal data from the EU to the US for processing – potentially disrupting the business of thousands of companies.
The plaintiff, Austrian privacy advocate Max Schrems, claimed that US privacy legislation was insufficiently robust to prevent national security and intelligence authorities from acquiring – and misusing – Europeans’ personal data. The EU’s top court agreed, abolishing the Privacy Shield and requiring American companies that exchange data with European partners to comply with the standards set out by the GDPR, the EU’s data privacy law.
Following this landmark ruling, ensuring the secure flow of data from one jurisdiction to another will be a significant challenge, given the lack of an international regulatory framework for data transfers and emerging conflicts between competing data privacy regulations.
This comes at a time when the COVID-19 crisis has further underscored the urgent need for collaborative international research involving the exchange of personal data – in this case, sensitive health data.
Will data protection regulations stand in the way of this and other vital data sharing?
The Privacy Shield was a stopgap measure to facilitate data-sharing between the US and the EU which ultimately did not withstand legal scrutiny. Robust, compliant-by-design tools beyond contractual frameworks will be required in order to protect individual privacy while allowing data-driven research on regulated data and business collaboration across jurisdictions.
Fortunately, innovative privacy-enhancing technologies (PETs) can be the stable bridge connecting differing – and sometimes conflicting – privacy frameworks. Here’s why policy alone will not suffice to resolve existing data privacy challenges – and how PETs can deliver the best of both worlds:
A new paradigm for ethical and secure data sharing
The abolition of the Privacy Shield poses major challenges for over 5,000 American and European companies which previously relied on its existence and must now confront a murky legal landscape. While big players like Google and Zoom have the resources to update their compliance protocols and negotiate legal contracts between transfer partners, smaller innovators lack these means and may see their activities slowed or even permanently halted. Privacy legislation has already impeded vital cross-border research collaborations – one prominent example is the joint American-Finnish study regarding the genomic causes of diabetes, which “slowed to a crawl” due to regulations, according to the head of the US National Institutes of Health (NIH).
One response to the Schrems II ruling might be expediting moves towards a federal data privacy law in the US. But this would take time: in Europe, over two years passed between the adoption of GDPR and its enforcement. Given that smaller companies are facing an immediate legal threat to their regular operations, a federal privacy law might not come quickly enough.
Even if such legislation were to be approved in Washington, it is unlikely to be fully compatible with GDPR – not to mention widening privacy regulations in other countries. The CCPA, the major statewide data protection initiative, is generally considered less stringent than GDPR, meaning that even CCPA-compliant businesses would still have to adapt to European standards.
In short, the existing legislative toolbox is insufficient to protect the operations of thousands of businesses in the US and around the world, which is why it’s time for a new paradigm for privacy-preserving data sharing based on Privacy-Enhancing Technologies.
The advantages of privacy-enhancing technologies
Compliance costs and legal risks are prompting companies to consider an innovative data sharing method based on PETs: a new genre of technologies which can help them bridge competing privacy frameworks. PETs are a category of technologies that protect data along its lifecycle while maintaining its utility, even for advanced AI and machine learning processes. PETs allow their users to harness the benefits of big data while protecting personally identifiable information (PII) and other sensitive information, thus maintaining stringent privacy standards.
One such PET playing a growing role in privacy-preserving information sharing is Homomorphic Encryption (HE), a technique regarded by many as the holy grail of data protection. HE enables multiple parties to securely collaborate on encrypted data by conducting analysis on data which remains encrypted throughout the process, never exposing personal or confidential information. Through HE, companies can derive the necessary insights from big data while protecting individuals’ personal details – and, crucially, while remaining compliant with privacy legislation because the data is never exposed.
Jim Halpert, a data regulation lawyer who helped draft the CCPA and is Global Co-Chair of the Data Protection, Privacy and Security practice at DLA Piper, views certain solutions based on HE as effective compliance tools.
“Homomorphic Encryption encrypts data elements in such a way that they cannot identify, describe or in any way relate to a person or household. As a result, homomorphically encrypted data cannot be considered ‘personal information’ and is thus exempt from CCPA requirements,” Halpert says. “Companies which encrypt data through HE minimize the risk of legal threats, avoid CCPA obligations, and eliminate the possibility that a third party could mistakenly be exposed to personal data.”
The same principle applies to GDPR, which requires any personally identifiable information to be protected.
HE is applicable to any industry and activity which requires sensitive data to be analyzed by third parties; for example, research such as genomic investigations into individuals’ susceptibility to COVID-19 and other health conditions, and secure data analysis in the financial services industry, including financial crime investigations across borders and institutions. In these cases, HE enables users to legally collaborate across different jurisdictions and regulatory frameworks, maximizing data value while minimizing privacy and compliance risk.
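The additive property that makes HE useful for this kind of cross-border collaboration can be demonstrated with a toy implementation of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. This sketch uses deliberately tiny primes for readability; production systems use moduli of 2048 bits or more and a vetted cryptographic library, never hand-rolled code.

```python
import math
import random

def keygen(p=17, q=19):
    """Toy Paillier key generation with fixed small primes (demo only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid because we fix the generator g = n + 1
    return (n,), (n, lam, mu)

def encrypt(pub, m):
    """c = (n+1)^m * r^n mod n^2, with random r coprime to n."""
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(priv, c):
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    n, lam, mu = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return L * mu % n

pub, priv = keygen()
a, b = encrypt(pub, 20), encrypt(pub, 22)
# A third party can combine the ciphertexts without ever seeing 20 or 22:
total = a * b % (pub[0] ** 2)
print(decrypt(priv, total))  # 42
```

The party doing the computation only ever handles `a`, `b`, and `total`, all opaque ciphertexts; only the private-key holder can recover the sum, which is the property that keeps personal data unexposed during analysis.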
PETs will be crucial in allowing data to flow securely even after the Privacy Shield has been lowered. The EU and the US have already entered negotiations aimed at replacing the Privacy Shield, but while a palliative solution might satisfy business interests in the short term, it won’t remedy the underlying problems inherent to competing privacy frameworks. Any replacement would face immediate legal challenges in a potential “Schrems III” case. Tech is in large part responsible for the growing data privacy quandary. The onus, then, is on tech itself to help facilitate the free flow of data without undermining data protection.
Ransomware has been noted by many as the most threatening cybersecurity risk for organizations, and it’s easy to see why: in 2019, more than 50 percent of all businesses were hit by a ransomware attack – costing an estimated $11.5 billion. In the last month alone, major consumer corporations, including Canon, Garmin, Konica Minolta and Carnival, have fallen victim to major ransomware attacks, resulting in the payment of millions of dollars in exchange for file access.
While there is a lot of discussion about preventing ransomware from affecting your business, the best practices for recovering from an attack are a little harder to pin down.
While the monetary amounts may be smaller for your organization, the importance of regaining access to the information is just as high. What steps should you take for effective ransomware recovery? A few of our best tips are below.
1. Infection detection
Arguably the most challenging step for recovering from a ransomware attack is the initial awareness that something is wrong. It’s also one of the most crucial. The sooner you can detect the ransomware attack, the less data may be affected. This directly impacts how much time it will take to recover your environment.
Ransomware is designed to be very hard to detect. When you see the ransom note, it may have already inflicted damage across the entire environment. Having a cybersecurity solution that can identify unusual behavior, such as abnormal file sharing, can help quickly isolate a ransomware infection and stop it before it spreads further.
Abnormal file behavior detection is one of the most effective means of detecting a ransomware attack and produces the fewest false positives when compared to signature-based or network traffic-based detection.
One additional method to detect a ransomware attack is to use a “signature-based” approach. The issue with this method is that it requires the ransomware to be known: if the code is available, software can be trained to look for it. Relying on signatures alone is not recommended, however, because sophisticated attacks use new, previously unknown forms of ransomware. An AI/ML-based approach is therefore recommended, one that looks for behaviors such as rapid, successive encryption of files and determines that an attack is happening.
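One simple behavioral signal underlying such detection is entropy: encrypted or compressed data has near-maximal Shannon entropy (approaching 8 bits per byte), while typical documents sit well below that. The following is an illustrative heuristic, not a product-grade detector, with thresholds and the inventory format chosen arbitrarily for the demo:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte of `data`."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_mass_encryption(recent_writes, threshold=7.5, min_files=5):
    """recent_writes: (path, content) pairs observed in a short time window.
    Flags a burst of writes whose contents look encrypted."""
    suspicious = [path for path, data in recent_writes
                  if shannon_entropy(data) > threshold]
    return len(suspicious) >= min_files, suspicious

# Synthetic window: five files suddenly rewritten with random-looking bytes,
# plus one ordinary low-entropy text file.
window = [(f"doc{i}.txt", os.urandom(4096)) for i in range(5)]
window.append(("notes.txt", b"quarterly report draft " * 100))
alert, files = looks_like_mass_encryption(window)
print(alert)  # True
```

A real detector would combine this with write rates, file-extension changes, and process lineage, precisely because single heuristics can be evaded or can false-positive on legitimately compressed files.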
Effective cybersecurity also includes good defensive mechanisms that protect business-critical systems like email. Often ransomware affects organizations by means of a phishing email attack or an email that has a dangerous file attached or hyperlinked.
If organizations are ill-equipped to handle dangerous emails, this can be an easy way for ransomware to make its way inside the walls of your organization’s on-premises environment or within the cloud SaaS environment. With cloud SaaS environments in particular, controlling third-party applications that have access to your cloud environment is extremely important.
2. Contain the damage
After you have detected an active infection, the ransomware process can be isolated and stopped from spreading further. If this is a cloud environment, these attacks often stem from a remote file sync or other process driven by a third-party application or browser plug-in running the ransomware encryption process. Digging in and isolating the source of the ransomware attack can contain the infection so that the damage to data is mitigated. To be effective, this process must be automated.
Many attacks happen after-hours when admins are not monitoring the environment, and the reaction must be rapid to stop the spread of the infection. Security policy rules and scripts must be put in place as part of proactive protection. Thus, when an infection is identified, the automation kicks in to stop the attack by removing the executable file or extension and isolating the infected files from the rest of the environment.
Another way organizations can help protect themselves and contain the damage should an attack occur is by purchasing cyber liability insurance. Cyber liability insurance is a specialty insurance line intended to protect businesses (and the individuals providing services from those businesses) from internet-based risks (like ransomware attacks) and risks related to information technology infrastructure, information privacy, information governance liability, and other related activities. In this type of attack situation, cyber liability insurance can help relieve some of the financial burden of restoring your data.
3. Restore affected data
In most cases, even if the ransomware attack is detected and contained quickly, there will still be a subset of data that needs to be restored. This requires having good backups of your data to pull back to production. Following the 3-2-1 backup best practice, it’s imperative to have your backup data in a separate environment from production.
The 3-2-1 backup rule consists of the following guidelines:
- Keep 3 copies of any important file, one primary and two backups
- Keep the file on 2 different media types
- Maintain 1 copy offsite
If your backups are of cloud SaaS environments, storing these “offsite” using a cloud-to-cloud backup vendor aligns with this best practice. This will significantly minimize the chance that your backup data is affected along with your production data.
The tried and true way to recover from a ransomware attack involves having good backups of your business-critical data. The importance of backups cannot be stressed enough when it comes to ransomware. Recovering from backup allows you to be in control of getting your business data back and not the attacker.
All too often, businesses may assume incorrectly that the cloud service provider has “magically protected” their data. While there are a few mechanisms in place on the cloud service provider side, ultimately the data is your responsibility as part of the shared responsibility model of most CSPs. Microsoft, for example, documents its position in its published shared responsibility model.
4. Notify the authorities
Many of the major compliance regulations that most organizations fall under today, such as PCI-DSS, HIPAA, GDPR, and others, require that organizations notify regulatory agencies of the breach. Notification of the breach should be immediate and the FBI’s Internet Crime Complaint Center should be the first organization alerted. Local law enforcement should be informed next. If your organization is in a governed industry, there may be strict guidelines regarding who to inform and when.
5. Test your access
Once data has been restored, test access to the data and any affected business-critical systems to ensure the recovery of the data and services have been successful. This will allow any remaining issues to be remedied before turning the entire system back over to production.
If you’re experiencing slower than usual response times in the IT environment or larger-than-normal file sizes, it may be a sign that something sinister is still looming in the database or storage.
Ransomware prevention vs. recovery
Sometimes the best offense is a good defense. When it comes to ransomware and regaining access to critical files, there are only two options. You either restore your data from backup if you were forward-thinking enough to have such a system in place, or you have to pay the ransom. Beyond the obvious financial implications of acquiescing to the hacker’s demands, paying is risky because there is no way to ensure they will actually provide access to your files after the money is transferred.
There is no code of conduct or contract when negotiating with a criminal. A recent report found that some 42 percent of organizations that paid a ransom did not get their files decrypted.
Given the rising number of ransomware attacks targeting businesses, the consequences of not having a secure backup and detection system in place could be catastrophic to your business. Investing in a solution now helps ensure you won’t make a large donation to a nefarious organization later. Learning from the mistakes of other organizations can help protect yours from a similar fate.
33% of companies within the digital supply chain expose common network services such as data storage, remote access and network administration to the internet, according to RiskRecon. In addition, organizations that expose unsafe services to the internet also exhibit more critical security findings.
The research is based on an assessment of millions of internet-facing systems across approximately 40,000 commercial and public institutions. The data was analyzed in two strategic ways: the direct proportion of internet-facing hosts running unsafe services, as well as the percentage of companies that expose unsafe services somewhere across their infrastructure.
The research concludes that the impact is further heightened when vendors and business partners run unsafe, exposed services used by their digital supply chain customers.
“Blocking internet access to unsafe network services is one of the most basic security hygiene practices. The fact that one-third of companies in the digital supply chain are failing at one of the most basic cybersecurity practices should serve as a wake-up call to executives and third-party risk management teams,” said Kelly White, CEO, RiskRecon.
“We have a long way to go in hardening the infrastructure that we all depend on to safely operate our businesses and protect consumer data. Risk managers will be well served to leverage objective data to better understand and act on their third-party risk.”
Exposing unsafe network services: Key findings
- 33% of organizations expose one or more unsafe services across hosts under their control. As such, admins should either eliminate direct internet access or deploy compensating controls for when/if such services are required.
- Direct internet access to database services should be prohibited or secured. Among the top three unsafe network services, datastores, such as S3 buckets and MySQL databases, are the most commonly exposed.
- Digital transformation and the shift to remote work need to be considered. Remote access is the second most commonly exposed service; admins should consider restricting the accessibility of these services to authorized and internal users only.
- Universities are woefully exposed. With a culture that boasts open access to information and collaboration, the education sector has the greatest tendency to expose unsafe network services on non-student systems, with 51.9% of universities running unsafe services.
- Global regions lack proper security posture. Countries such as Ukraine, Indonesia, Bulgaria, Mexico and Poland show the highest rates of domestically-hosted systems running unsafe services.
- Beware of Elasticsearch and MongoDB. Firms that expose these services to the internet have a 4x to 5x higher rate of severe security findings than firms that do not.
- Unsafe services uncover other security issues. Failing to patch software and implement web encryption are two of the most prevalent security findings associated with unsafe services.
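The findings above come down to a simple hygiene question: is a host exposing a service that should never face the internet? A minimal sketch of such a check, with an illustrative (not exhaustive) port-to-service mapping:

```python
# Illustrative mapping of ports to services commonly considered unsafe
# to expose directly to the internet. The real list is longer and
# context-dependent.
UNSAFE_PORTS = {
    3306: "MySQL",          # datastores
    5432: "PostgreSQL",
    27017: "MongoDB",
    9200: "Elasticsearch",
    3389: "RDP",            # remote access
    23: "Telnet",
    5900: "VNC",
    161: "SNMP",            # network administration
}

def unsafe_exposures(open_ports):
    """Return the unsafe services implied by a host's open ports."""
    return sorted(UNSAFE_PORTS[p] for p in open_ports if p in UNSAFE_PORTS)

# Example: a host exposing HTTPS, MySQL and RDP
print(unsafe_exposures({443, 3306, 3389}))  # ['MySQL', 'RDP']
```

A real assessment would feed this from actual scan data; the point is that the check itself is trivially automatable, which is why failing it signals broader hygiene problems.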
“This research should be welcome news to organizations struggling under the pressure to conduct exhaustive and time-consuming security assessments of their external business partners,” said Jay Jacobs, partner, Cyentia Institute.
“Similar to how medical doctors diagnose illnesses through various outward signs exhibited by their patients, third-party risk programs can perform quick, reliable diagnostics to identify underlying cybersecurity ailments.
“Not only is the presence of unsafe network services a problem in itself, but the data we examine in this report also shows that they’re a symptom of broader problems. Easy, reliable risk indicators like this offer a rare quick win for risk assessments.”
Researchers discovered a malicious functionality within the iOS MintegralAdSDK (aka SourMint), distributed by Chinese company Mintegral.
Functional flow of a user ad-click being hijacked by the Mintegral SDK
Major privacy concerns
According to Snyk, SourMint actively performed ad fraud on hundreds of iOS apps and brought with it major privacy concerns to hundreds of millions of consumers.
On the surface, the MintegralAdSDK posed as a legitimate advertising SDK for iOS app developers, but its malicious code appeared to commit ad attribution fraud by secretly accessing link clicking activity within thousands of iOS apps that use the SDK.
SourMint also spied on user link click activity, improperly tracking requests performed by the app and reporting it back to Mintegral’s servers. Snyk’s researchers exposed SourMint and responsibly disclosed the information to Apple, alerting them to the active supply chain attack.
The SDK was distributed through Mintegral’s GitHub repository, the CocoaPods package manager for iOS, and Gradle/Maven for Android (where it does not appear to be malicious). Unbeknownst to developers integrating it into their applications, the iOS versions of the SDK were malicious.
The SDK remained undetected for more than a year within the Apple App Store; SourMint first appeared in version 5.51 of the SDK in July 2019 and persisted through subsequent releases. Since then it has been identified in 1,200 iOS apps, including approximately 70 of the top 500 free apps found on the App Store, some of which are in the top 100.
Malicious iOS SDK functionality
Researchers found that SourMint has two major malicious functionalities in the SDK:
- Compromising app user privacy: SourMint monitored and tracked when users clicked on links, spying on individual link activity by hooking into the communication functions the iOS app used. Once installed, the SDK inserted itself via method swizzling into several functions responsible for opening resources in response to a user clicking a link. This allowed Mintegral to track all URLs accessed by the user and report the data back to Mintegral’s servers. This has impacted millions of consumers to date.
- Advertising attribution fraud: SourMint hijacked competing ad networks and consumers by manipulating click notifications used in attribution for app installs that were not actually generated by the Mintegral advertising platform. This process tricked attribution platforms into associating an install created by another source with Mintegral, manipulating the “last click attribution” model commonly applied by attribution providers. This likely impacted the business of other advertisers and developers by diverting value that should have been attributed to them.
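The last-click model being exploited is easy to illustrate: whichever network reports the most recent click before an install gets credit, so a hijacker that fires a fake click notification just before install time wins attribution. A hypothetical sketch (the data structure and names are invented for illustration):

```python
def attribute_install(clicks, install_time):
    """Last-click attribution: credit the network that reported the
    latest click preceding the install."""
    prior = [c for c in clicks if c["time"] <= install_time]
    return max(prior, key=lambda c: c["time"])["network"] if prior else None

# A legitimate network drove the real click at t=100, but a fraudulent
# SDK injects its own click notification just before the install at t=120.
clicks = [
    {"network": "LegitAds", "time": 100},
    {"network": "HijackerSDK", "time": 119},  # injected fake click
]
print(attribute_install(clicks, install_time=120))  # HijackerSDK
```

Without the injected click, attribution would correctly go to the network that actually drove the install, which is exactly the value the fraud diverts.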
“As the first malicious SDK of this kind to infiltrate the iOS ecosystem, SourMint was very sophisticated. It avoided detection for so long by utilizing various obfuscations and anti-debugging tricks,” said Danny Grander, CSO, Snyk. “Developers were unaware of the malicious package upon deploying the application, allowing it to proliferate for more than a year. As cyber risk continues to ramp up, it’s critical for all software developers to mitigate the potential of malicious code making it into production and creating consumer privacy risk at this scale.”
Where there’s money, there’s also an opportunity for fraudulent actors to leverage security flaws and weak entry-points to access sensitive, personal consumer information.
This has caused a sizeable percentage of consumers to avoid adopting mobile banking completely and has become an issue for financial institutions who must figure out how to provide a full range of financial services through the mobile channel in a safe and secure way. However, with indisputable demand for a mobile-first experience, the pressure to adapt has become unavoidable.
In order to offer that seamless, omnichannel experience consumers crave, financial institutions have to understand the malicious actors and fraudulent tactics they are up against. Here are a few that have to be on the mobile banking channel’s radar.
1. Increased device usage sparks surge in mobile malware
Banking malware has become a very common mobile threat, even more so now as fraudsters leverage fear and uncertainty surrounding the global pandemic. According to a recent report by Malwarebytes, mobile banking malware has surged over recent months, focused on stealing personal information and using weakened remote connections and mobile devices in a work-from-home environment to gain access to more valuable corporate networks.
The financial burden of a data breach resulting from mobile malware could potentially set organizations back millions of dollars, as well as do some serious damage to customer trust and loyalty.
2. Sacrificing software quality and security through premature product rollouts
Securing mobile is a laborious task that requires mobile app developers to factor in several entities, including device manufacturers, mobile operating system developers, app developers, mobile carriers, and service providers. No platform or device can be secured in the same way, meaning developers are constantly having to overcome a unique set of challenges in order to reduce the risk of fraudulent activity.
The reality of such a complex ecosystem is that mobile app developers are not always qualified to understand all the risks at play, which leads to unsecured mobile data, connections, and transactions. Additionally, the speed at which the market moves thanks to emerging technologies and innovations creates an added layer of pressure for developers. Lacking the resources and time to properly protect consumers can lead to high-profile attacks where sensitive data is exploited.
3. Vulnerabilities in digital security protocols
At any given time, every entity in the ecosystem described above must have high confidence in the entity on the other side of the transaction to ensure its legitimacy. A lack of digital security protocols like secure sockets layer (SSL) and transport layer security (TLS) in mobile banking apps makes it difficult to establish the encrypted links between entities that ultimately help prevent phishing and man-in-the-middle attacks.
If we continue growing our ecosystem at the current rate, adding to its complexity and connecting more and more third-party services and networks, we can no longer avoid fixing the broken system we have for SSL certificate validation.
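In Python, for example, getting certificate validation right comes down to using a default TLS context rather than disabling checks, something mobile back ends and test harnesses get wrong surprisingly often. A minimal sketch:

```python
import ssl

# The default context enables both certificate verification and
# hostname checking; a client that disables either is open to
# man-in-the-middle attacks.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True

# The anti-pattern: an unverified context accepts any certificate.
# Never ship this outside a controlled test environment.
bad = ssl._create_unverified_context()
print(bad.verify_mode == ssl.CERT_NONE)      # True
```

The broken validation the author alludes to is rarely a flaw in TLS itself; it is clients quietly opting out of verification like the second context above.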
4. Unreliable mobile device identification
Another issue at play is device identification. The only way other entities in the ecosystem can recognize a unique device is through device fingerprinting. This is a process through which certain unique attributes of a device – operating system, type and version of web browser, the device’s IP address, etc. – are combined for identification. This information can then be pulled from a database for future fraud prevention purposes and a range of other use-cases.
Data privacy concerns and limited data sharing on devices, however, have weakened the process and reliability of identification. If we do not have enough discrete data points to establish a reliable digital fingerprint, the whole system becomes ineffective.
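Conceptually, a fingerprint is just a stable digest over whatever attributes the device will reliably reveal; strip attributes away (as privacy controls increasingly do) and fingerprints become less distinctive. An illustrative sketch:

```python
import hashlib

def device_fingerprint(attrs: dict) -> str:
    """Hash a canonical, sorted rendering of device attributes.
    Fewer attributes means a less distinctive fingerprint."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

full = {"os": "iOS 14.2", "browser": "Safari 14", "ip": "203.0.113.7"}
reduced = {"os": "iOS 14.2"}  # privacy controls strip the rest

# The same attributes always produce the same fingerprint, but a
# reduced attribute set collapses many distinct devices together.
print(device_fingerprint(full) != device_fingerprint(reduced))  # True
```

Real fingerprinting systems combine far more signals (fonts, canvas rendering, sensor data), but the failure mode is the same: too few discrete data points, and the identifier stops being unique.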
5. Time to update authentication techniques
Fraudsters are always on the lookout for ways to intercept confidential login information that grants them access to protected accounts. Two-factor authentication (2FA) has become banks’ preferred security method for reliably authenticating users trying to access the mobile channel and staying ahead of cybercriminals.
More often than not, 2FA relies on one-time-passwords (OTPs) delivered by SMS to the account holder upon attempted login. Unfortunately, with phishing – especially via SMS – on the rise, hackers can gain access to a mobile device and OTPs delivered via SMS, and gain access to accounts and authenticate fraudulent transactions.
There are also a number of other tactics – e.g., SIM-swapping – attackers use to gain access to sensitive information and accounts.
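SMS is only the delivery channel; the one-time password itself is typically an HMAC-based code (RFC 4226, extended to time-based codes in RFC 6238), which is why app-based or hardware generators can replace SMS delivery without changing the server-side logic. A minimal HOTP sketch using only the standard library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vectors for the ASCII secret "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

TOTP simply derives the counter from the clock (e.g., `int(time.time()) // 30`), so both parties can compute the same code without transmitting it, removing the SMS interception risk entirely.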
6. Lack of industry regulation and standards
Without the establishment of rigorous standards and guidance on online banking security and protecting the end-user, low consumer trust will inhibit mass market acceptance. The Federal Financial Institutions Examination Council (FFIEC) has yet to issue ample guidance on the topic of authentication and identification on mobile devices. Mobile security standards need to be a top priority for regulators, especially as new technologies and mobile malware continue to disrupt the market.
The underlying theme for banks to keep in mind is that trust is a currency they cannot afford to lose in such a competitive financial services market. In the race to provide seamless, omnichannel banking experiences, integrating better security protocols without compromising usability can feel like a constant balancing act. Researching the latest tools and technology as well as building trusted partner relationships with third-party service providers is the only way banks can differentiate themselves in a dynamic security landscape.
A number of organizations face shortcomings in monitoring and securing their cloud environments, according to a Tripwire survey of 310 security professionals.
76% of security professionals state they have difficulty maintaining security configurations in the cloud, and 37% said their risk management capabilities in the cloud are worse compared with other parts of their environment. 93% are concerned about human error accidentally exposing their cloud data.
Few organizations assessing overall cloud security posture in real time
Attackers are known to run automated searches to find sensitive data exposed in the cloud, making it critical for organizations to monitor their cloud security posture on a recurring basis and fix issues immediately.
However, the report found that only 21% of organizations assess their overall cloud security posture in real time or near real time. While 21% said they conduct weekly evaluations, 58% do so only monthly or less frequently. Despite widespread worry about human errors, 22% still assess their cloud security posture manually.
“Security teams are dealing with much more complex environments, and it can be extremely difficult to stay on top of the growing cloud footprint without having the right strategy and resources in place,” said Tim Erlin, VP of product management and strategy at Tripwire.
“Fortunately, there are well-established frameworks, such as CIS benchmarks, which provide prioritized recommendations for securing the cloud. However, the ongoing work of maintaining proper security controls often goes undone or puts too much strain on resources, leading to human error.”
Utilizing a framework to secure the cloud
Most organizations utilize a framework for securing their cloud environments – CIS and NIST being two of the most popular – but only 22% said they are able to maintain continuous cloud security compliance over time.
While 91% of organizations have implemented some level of automated enforcement in the cloud, 92% still want to increase their level of automated enforcement.
Additional survey findings show that automation levels varied across cloud security best practices:
- Only 51% have automated solutions that ensure proper encryption settings are enabled for databases or storage buckets.
- 45% automatically assess new cloud assets as they are added to the environment.
- 51% have automated alerts with context for suspicious behavior.
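Automating checks like these usually means evaluating fetched configuration against policy rather than clicking through consoles. A hypothetical sketch that flags storage buckets without default encryption (the config format here is invented for illustration; real tooling would read it from a cloud provider's API):

```python
def unencrypted_buckets(buckets):
    """Return names of buckets whose config lacks default encryption.
    A missing setting is treated the same as an explicit False."""
    return [b["name"] for b in buckets if not b.get("default_encryption")]

# Hypothetical inventory, as it might come back from an asset API
inventory = [
    {"name": "customer-exports", "default_encryption": True},
    {"name": "legacy-backups"},                        # never configured
    {"name": "audit-logs", "default_encryption": False},
]
print(unencrypted_buckets(inventory))  # ['legacy-backups', 'audit-logs']
```

Run on a schedule against live inventory, a check like this turns a manual monthly review into the continuous assessment the survey found lacking.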
Maximizing data privacy should be on every organization’s priority list. We all know how important it is to keep data and applications secure, but what happens when access to private data is needed to save lives? Should privacy be sacrificed? Does it need to be?
Consider the case of contact tracing, which has become a key tool in the fight to control COVID-19. It’s a daunting task greatly facilitated by collecting and analyzing real-time identity and geo-location data gathered from mobile devices—sometimes voluntarily and sometimes not.
In most societies, such as the United States and the European Union, the use of location and proximity data by governments may be strictly regulated or even forbidden—implicitly impeding the ability to efficiently contain the spread of the virus. Where public health has been prioritized over data privacy, the use of automated tracing has contributed to the ability to quickly identify carriers and prevent disease spread. However, data overexposure remains a major concern for those using the application. They worry about the real threat that their sensitive location data may eventually be misused by bad actors, IT insiders, or governments.
What if it were possible to access the data needed to get contact tracing answers without actually exposing personal data to anyone anywhere? What if data and applications could be secure by default—so that data could be collected, stored, and results delivered without exposing the actual data to anyone except the people involved?
Unfortunately, current systems and software will never deliver the absolute level of data privacy required because of a fundamental hardware flaw: data cannot be simultaneously used and secured. Once data is put into memory, it must be decrypted and exposed to be processed. This means that once a bad actor or malicious insider gains access to a system, it’s fairly simple for that system’s memory and/or storage to be read, effectively exposing all data. It’s this data security flaw that’s at the foundation of virtually every data breach.
Academic and industry experts, including my co-founder Dr. Yan Michalevsky, have known for years that the ultimate, albeit theoretical, resolution of this flaw was to create a compute environment rooted in secure hardware. Such solutions have already been implemented in cell phones and some laptops to secure storage and payments, and they are working well, proving the concept works as expected.
It wasn’t until 2015 that Intel introduced Software Guard Extensions (SGX)—a set of security-related machine-level instruction codes built into their new CPUs. AMD has also added a similar proprietary instruction set called SEV technology into their CPUs. These new and proprietary silicon-level command sets enable the creation of encrypted and isolated parts of memory, and they establish a hardware root of trust that helps close the data security flaw. Such isolated and secured segments of memory are known as secure enclaves or, more generically, Trusted Execution Environments (TEEs).
A broad consortium of cloud and software vendors (called the Confidential Computing Consortium) is working to develop these hardware-level technologies by creating the tools and cloud ecosystems over which enclave-secured applications and data can run. Amazon Web Services announced its version of secure enclave technology, Nitro Enclaves, in late 2019. Most recently, both Microsoft (Azure confidential computing) and Google announced their support for secure enclaves as well.
These enclave technologies and secure clouds should enable applications, such as COVID-19 contact tracing, to be implemented without sacrificing user privacy. The data and application enclaves created using this technology enable sensitive data to be processed without ever exposing either the data or the computed results to anyone but the actual end user. This means public health organizations can have automated contact tracing that can identify, analyze, and provide needed alerts in real-time—while simultaneously maximizing data privacy.
Creating or shifting applications and data to the secure confines of an enclave can take a significant investment of time, knowledge, and tools. That’s changing quickly. New technologies are becoming available that will streamline the operation of moving existing applications and all data into secure enclaves without modification.
As this happens, all organizations will be able to secure all data by default. This will enable CISOs, security professionals—and public health officials—to sleep soundly, knowing that private data and applications in their care will be kept truly safe and secure.
Instead of relying on customers to protect their vulnerable smart home devices from being used in cyberattacks, Ben-Gurion University of the Negev (BGU) and National University of Singapore (NUS) researchers have developed a new method that enables telecommunications and internet service providers to monitor these devices.
An overview of the key steps in the proposed method
According to their new study, the ability to launch massive DDoS attacks via a botnet of compromised devices is an exponentially growing risk in the Internet of Things (IoT). Such attacks, possibly emerging from IoT devices in home networks, impact the attack target, as well as the infrastructure of telcos.
“Most home users don’t have the awareness, knowledge, or means to prevent or handle ongoing attacks,” says Yair Meidan, a Ph.D. candidate at BGU. “As a result, the burden falls on the telcos to handle. Our method addresses a challenging real-world problem that has already resulted in damaging attacks in Germany and Singapore, and poses a risk to telco infrastructure and their customers worldwide.”
Each connected device has a unique IP address. However, home networks typically use gateway routers with NAT functionality, which replaces the local source IP address of each outbound data packet with the household router’s public IP address. Consequently, detecting connected IoT devices from outside the home network is a challenging task.
The researchers developed a method to detect connected, vulnerable IoT models before they are compromised by monitoring the data traffic from each smart home device. This enables telcos to verify whether specific IoT models, known to be vulnerable to exploitation by malware for cyberattacks, are connected to the home network. It helps telcos identify potential threats to their networks and take preventive actions quickly.
By using the proposed method, a telco can detect vulnerable IoT devices connected behind a NAT, and use this information to take action. In the case of a potential DDoS attack, this method would enable the telco to take steps to spare the company and its customers harm in advance, such as offloading the large volume of traffic generated by an abundance of infected domestic IoT devices. In turn, this could prevent the combined traffic surge from hitting the telco’s infrastructure, reduce the likelihood of service disruption, and ensure continued service availability.
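Although the researchers' classifier is trained on labeled traffic, the intuition can be sketched with a toy rule: devices of a given model tend to contact characteristic endpoints and ports, and those destination features survive NAT even though the source IP does not. (The signatures below are invented for illustration; the real method learns features from labeled flow data.)

```python
# Hypothetical traffic signatures: destination (host, port) pairs
# characteristic of specific IoT models.
SIGNATURES = {
    ("fw-update.examplecam.com", 8443): "ExampleCam DVR-1000",
    ("cloud.examplebulb.io", 5683): "ExampleBulb v2",
}

def detect_models(flows):
    """Infer IoT device models present behind a NAT from outbound
    flows, which all share the router's public source IP."""
    return {SIGNATURES[(f["dst"], f["port"])]
            for f in flows if (f["dst"], f["port"]) in SIGNATURES}

flows = [
    {"dst": "fw-update.examplecam.com", "port": 8443},
    {"dst": "www.example.com", "port": 443},  # ordinary web traffic
]
print(detect_models(flows))  # {'ExampleCam DVR-1000'}
```

This also illustrates the spoofing weakness the researchers mention: an infected device that contacts non-default endpoints would simply fall outside the signature set.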
“Unlike some past studies that evaluated their methods using partial, questionable, or completely unlabeled datasets, or just one type of device, our data is versatile and explicitly labeled with the device model,” Meidan says. “We are sharing our experimental data with the scientific community as a novel benchmark to promote future reproducible research in this domain.” This dataset is available here.
This research is a first step toward dramatically mitigating the risk posed to telcos’ infrastructure by domestic NAT IoT devices. In the future, the researchers seek to further validate the scalability of the method, using additional IoT devices that represent an even broader range of IoT models, types and manufacturers.
“Although our method is designed to detect vulnerable IoT devices before they are exploited, we plan to evaluate the resilience of our method to adversarial attacks in future research,” Meidan says. “For instance, a spoofing attack, in which an infected device performs many dummy requests to IP addresses and ports that are different from the default ones, could result in missed detection.”
As consumers’ concerns about their digital privacy continue to grow and who is responsible for guarding it remains unclear, new research conducted by Ponemon Institute reveals a lack of empowerment consumers feel when it comes to their data privacy.
Address privacy risks
The research points to a privacy gap between the consumer data protection individuals want and what industry and regulators provide. While the majority of consumers want their data protected, they're still waiting on, or expecting, the federal government or industries to provide this protection.
For instance, 60% of consumers believe government regulation should help address the privacy risks facing consumers today, of which 34% say government regulation is needed to protect personal privacy and 26% believe a hybrid option (regulation and self-regulation) should be pursued.
“This research revealed much of the tension surrounding digital privacy today. Based on my polling experience, these findings make a compelling case for the important role identity protection products and services play in protecting consumers’ privacy. The study shows that many consumers are alarmed by the uptick in privacy scandals and want to protect their information, but don’t know how to and feel like they lack the right tools to do so,” said Dr. Larry Ponemon, chairman of Ponemon Institute.
Interestingly, the study found that 64% of consumers say they think it is “creepy” when they receive online ads that are relevant to them, but not based on their online search behavior or publicly available information. This confirms that many consumers experience this phenomenon and are alarmed by it. In addition, 73% of consumers say advertisers should allow them to “opt-out” of receiving ads on any specific topic at any time.
This research also reveals a lack of empowerment that consumers feel in their ability to protect their privacy. While 74% of consumers say they have no control over the personal information that is collected on them, they are not taking action to limit the data they provide when using online services. In fact, 54% of consumers say they do not consciously limit what personal data they are providing. This lack of empowerment can have devastating effects on consumers’ privacy if it goes unchecked.
Other key findings
Consumer concern is increasing: 68% of consumers are more concerned about the privacy and security of their personal information than they were three years ago. Three-fourths of consumers (75%) in the over 55 age group have become more concerned about their privacy over the past three years.
Search engines least trusted: 92% of consumers believe search engines are sharing and selling their private data, 78% believe social media platforms are and 63% of consumers think shopping sites are as well. Similarly, 86% of respondents say they are very concerned when using Facebook and Google and 66% of respondents say they are very concerned when shopping online or using online services.
Seniors against advertising tracking: 78% of older consumers say advertisers should not be able to serve ads based on their conversations and messaging.
Consumers have little hope in websites’ ad blocking: Only 33% of consumers expect websites to have an ad blocker that stops tracking and only 17% of consumers say they expect websites to limit the collection and sharing of personal information.
Split responsibility: 54% of consumers say online service providers should be accountable for protecting the privacy of consumers, while 45% say they themselves should assume responsibility.
How consumers protect themselves: 65% of consumers are using some type of privacy protection provided by their devices. Of these, 25% are setting a more restrictive data sharing setting, 21% are using both additional authentication controls and a more restrictive data sharing setting and 19% are using additional authentication controls.
Half of consumers are aware of the availability of protections: Of the protections available to consumers to protect their personal information, 52% say opting out of data collection and 48% say data sharing and encryption of personal information are available, respectively.
Only 10% of organizations are using data effectively for transformational purposes, according to NTT DATA Services.
While 79% of organizations recognize the strategic value of data, the study concludes their efforts to use it are hindered by significant challenges including siloed islands of data across the organization and lack of data skills and talent.
The study analyzes the critical role of data and analytics in helping businesses and organizations pivot from disruption to transformation, an imperative as they respond to today’s global economic climate.
Organizations starting to prioritize a data-driven culture
The study shows only 37% are very effective at using data to adopt or invent a new business model, and only 31% are using data to enter new markets. These different use cases show that organizations have started prioritizing a data-driven culture, but many are still lagging in the most basic aspects of data management and governance.
“Our study reinforces that organizations who act quickly and decisively on their data strategies – or Data Leaders – will recover from the global crisis better and even accelerate their success,” said Greg Betz, Senior Vice President, Data Intelligence and Automation, NTT DATA Services.
“C-suite executives must be champions for the vital role strong data governance plays in resolving systemic process failures and transitioning to new business models in response to the crisis.
“To rebound effectively, corporations, organizations and government agencies must shift to next-generation technologies and create contactless experiences, increased security, and scalable hybrid infrastructures – all reinforced by quality, integrated data.”
Data crisis: Organizations struggle to use data for transformation
The financial services (FS) sector accounts for 25% of the data leaders, making this the sector with the most data leaders. The survey shows that 59% of FS organizations report being aware of and fully prepared for new data regulations.
34% report data is shared seamlessly across the enterprise; however, they are the least likely to report they have clear data security processes in place.
The manufacturing sector boasts the second-highest number of data leaders in the study. More than eight out of 10 respondents say they can act swiftly if there is a data privacy breach; however, as with other sectors, when they attempt to derive value from their data, manufacturers struggle with data silos (24%), and they lack the necessary skills and talent to analyze their data (19%).
Among healthcare respondents, 60% say they’re aware and fully prepared for new and upcoming regulations, and approximately eight out of 10 say they’re confident they can comply with data privacy regulations.
However, this sector ranks first in its lack of data literacy skills — about a fifth of respondents report they don’t understand how to read, create and communicate data as information.
Lack of data talent and skills in the public sector
The public sector has the highest number of data laggards at 37%. Like other sectors, lack of data talent and skills is one of the public sector’s biggest barriers when attempting to understand and derive value from data.
Insurance companies are among the most likely to report they’re aware and fully prepared for new data regulations (58%) and have clear processes in place for securely using their data (50%).
However, when it comes to deriving value from data, insurance companies, like manufacturers, struggle with data silos and lack the right technologies to analyze their data.
“This study validates that many of the top data challenges organizations face today are decades old,” said Theresa Kushner, Consultant, AI and Analytics, NTT DATA Services. “The 2020 pandemic is a wakeup call for businesses at any scale, and a reminder that in today’s global economic climate the time to address data challenges and chart a new path is now.”
Digital privacy is paramount to the global community, but it must be balanced against the proliferation of digital-first crimes, including child sexual abuse, human trafficking, hate crimes, government suppression, and identity theft. The more the world connects with each other, the greater the tension between maintaining privacy and protecting those who could be victimized.
Global digital privacy
Online communication can connect and enrich people’s lives, but it is also being leveraged for malicious purposes. Bad actors can now reach a broader audience of potential victims, coordinate with others, share the most effective practices, and expand their illegal activities while being protected by a shield of online anonymity. The ability to scale harmful activities is as efficient as scaling community-building practices. The Internet has provided an environment for predators to thrive.
The challenge is to respect the rights of individuals while still allowing systematic controls to protect, dissuade and, when necessary, investigate for prosecution those who are purposefully undermining the safety of global citizens. Just as in the physical world, law enforcement is tasked with protecting people from criminals.
They require the ability to investigate crimes in a timely manner and identify suspects for prosecution. The right to privacy and the risk of being victimized are in conflict. Users, companies, and governments are intertwined and struggling to effectively understand and deal with legacy and evolving threats.
As this landscape is evolving, we wanted to start the conversation on what is the right balance of privacy and safety online.
A zero-sum game of privacy and safety
Currently, there is a perception of a zero-sum game for privacy and safety in the digital world. Expectations, regulations, and enforcement are fragmented, confusing, and inadequate. In 2009, the Child Online Protection Act (COPA) was overturned by the Supreme Court, which found that it violated First Amendment rights.
The practical implication of this change, coupled with Section 230 of the Communications Decency Act (CDA) of 1996 – which holds that platforms are not responsible for what third-party publishers post on them – is that children are no longer protected from adult content by websites; the responsibility was transferred to their parents.
The Children’s Online Privacy Protection Act of 1998 (COPPA) is the current law that protects child data privacy online. It mandates that any company with users under the age of 13 on its platform must prove that the parents gave their permission (often accomplished by entering credit card information to prove identity) and cannot retain data from children under 13.
Many platforms avoid these restrictions by stating that no one under the age of 13 is allowed on their platforms, but they have no identity-verification practices in place to enforce this in a meaningful way. They usually use a “check the box if you are over 13” honor system, so many children online end up lacking the privacy and safety protections that COPPA was meant to provide them.
Parents raising this generation of digital natives are digital immigrants themselves. They were young enough to adjust to the trends of social, mobile, and cloud, but they were mostly in their 20s when they gained access to them. This has left a significant knowledge gap around the cyberbullying, grooming, and sextortion that tweens and teens experience.
This generation of teens is exhibiting the highest rates of mental health issues and suicide we have seen to date. The trend is even more alarming when you factor in that, according to the Centers for Disease Control and Prevention, deaths from youth suicide are only part of the problem, because more young people survive suicide attempts than die from them.
Contributing author: Matthew Rosenquist, CISO, Eclipz.io.
A global research report by Lenovo highlights the triumphs, challenges and the consequences of the sudden shift to work-from-home (WFH) during the COVID-19 pandemic and how companies and their IT departments can power the new era of working remotely that will follow.
The study looks at how employees worldwide are responding to the “new normal” after 72 percent of those surveyed confirmed a shift in their daily work dynamic in the last three months. Employees feel more connected and more productive than ever before as they WFH, but the data shows financial, physical and emotional downsides for the global workforce.
“This data gave us valuable insights on the complex relationship employees have with technology as work and personal are becoming more intertwined with the increase in working from home,” commented Dilip Bhatia, Vice President of Global User and Customer Experience at Lenovo.
“Respondents globally feel more reliant on their work computers and more productive but have concerns about data security and want their companies to invest in more tech training. We’re using these takeaways to improve the development of our smart technology and better empower remote workers of tomorrow.”
Productivity, connectivity, and IT independence increase
Survey respondents around the world are embracing working away from the office – yet feel more connected to their devices than ever as the ‘office’ becomes wherever their technology is.
- 85 percent of those surveyed feel more reliant on their work PCs (laptops and/or desktop computers) than they did working from the office.
- 63 percent of the global workforce surveyed feel they are more productive working from home than when they were in the office.
- 52 percent of respondents believe they will continue to WFH more than they did pre-COVID-19 – even after social distancing measures lift.
This new confidence in working remotely has increased organizations’ need for customizable, modern IT solutions to be deployed at scale. Seventy-nine percent of participants agree that they have had to be their own IT person while working from home, and a majority of those surveyed believe employers should invest in more tech training to power WFH in the future.
WFH during the pandemic: Productivity can come with downsides
In the quick, dramatic shift to WFH that the pandemic brought on, workers say they have had to make personal investments in tech when their employers have not.
- 70 percent of employees surveyed globally said they purchased new technology to navigate working remotely
- Nearly 40 percent of those surveyed have had to partially or fully fund their own tech upgrades
- US respondents say they have personally spent an average of $348 to upgrade or improve technology while working at home due to COVID-19 – roughly $70 higher than the global average ($273), and the second-highest among 10 markets surveyed
New ways of working have also brought on a set of literal aches and pains. Seventy-one percent of workers surveyed complain of new or worsening conditions, including headaches, back and neck pains, difficulty sleeping and more.
Having a proper WFH setup is important to minimizing discomfort, including proper furniture and a larger-sized external monitor that can ergonomically adjust to natural eye-level.
Making time for breaks is also important since many built-in workday breaks for office workers (stretching, getting up to get coffee, going out for lunch, etc.) occur in different rhythms while working remotely.
Along with physical ailments, workers around the world identified other top challenges to the WFH experience: reduced personal connections with coworkers, an inability to separate work life from home life, and finding it hard to concentrate during work hours due to distractions at home.
Training on and implementation of high-quality video conferencing capabilities, such as noise-cancelling headphones and webcams on the work PC, tablet or phone, can help employees feel more connected with colleagues and less distracted at home.
Naturally, as technology has powered WFH around the world, surveyed workers also expressed concerns around security and being heavily reliant on tech connectivity to get the job done.
Employees of all ages agree their top tech-specific concern is how it makes their companies more vulnerable to data breaches. As a result, enhanced security will need to be built into employees’ hardware, software and services (including deployment, set-up and maintenance) from the get-go and is especially critical within today’s remote work environment.
The study also offers important guidance to employers around the world to embrace the new technology normal beyond the pandemic and into the future.
Flexibility isn’t just expected, it’s required
Overall, surveyed employees globally expressed mixed feelings about work in a post-COVID world – while some employees expressed being happy (27 percent) and excited (21 percent) about working from home forever, others feel neutral (22 percent) or conflicted (17 percent).
In light of this, it is more important than ever to give employees flexibility and the required tech to WFH so they don’t have to spend their own money on tech upgrades for work.
Tech should facilitate balance, collaboration, multi-tasking
Although most respondents say tech makes them efficient and more productive, employees identified other ways that tech could improve to help them gain an advantage at work:
- Help them better maintain work life balance
- Make it easier for employees to collaborate with others at outside companies and organizations
- Assist with multi-tasking and switching gears between projects more frequently
- Automate some of their daily tasks
More 5G, please!
Although emerging technologies may have been a novelty in the past, employees are now expressing excitement about the role they play in improving the WFH experience.
When asked which emerging technologies would have the most positive impact on their job within the next few years, employees ranked 5G wireless network technology and AI/ML as their top choices.
When implementing these technologies, companies should seek employee input on where they can make the most impact within their jobs. 5G provides a stronger and more secure connection while giving employees the ability to move around, and AI can help automate routine responsibilities.
A majority of employees have also expressed they are hopeful that emerging technologies can help improve work/life balance.
The global pandemic has seen the web take center stage. Banking, retail and other industries have seen large spikes in web traffic, and this trend is expected to become permanent.
Global brands fail to implement security controls
As attackers ramp up efforts to exploit this crisis, a slew of high-profile attacks on global brands and record-breaking fines for GDPR breaches have had little impact on client-side security and data protection deployments.
In many cases, data leakage is taking place via whitelisted, legitimate applications, without the website owner’s knowledge. What this report indicates is that data risk is everywhere and effective controls are rarely applied.
Key findings highlight the scale of vulnerability and that the majority of global brands fail to deploy adequate security controls to guard against client-side attacks.
The website supply chain leverages client-side connections that operate outside the span of effective control in 98% of sampled websites, and the client side is a primary attack vector for website attacks today.
Websites expose data to an average of 17 domains
Despite increasing numbers of high-profile breaches, forms – found on 92% of websites – expose data to an average of 17 domains. This includes PII, credentials, card transactions, and medical records.
While most users would reasonably expect this data to be accessible to the website owner’s servers and perhaps a payment clearing house, the analysis shows that this data is exposed to nearly 10X more domains than intended.
Nearly one-third of websites studied expose data to more than 20 domains. This provides some insight into how and why attacks like Magecart, formjacking and card skimming continue largely unabated.
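A practical first step for a site owner is simply enumerating which external domains a page’s scripts and forms point at. The sketch below is illustrative only – the markup and domain names are hypothetical – and uses nothing beyond the Python standard library:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyAuditor(HTMLParser):
    """Collects external domains that a page's scripts load from
    and its forms submit to."""
    def __init__(self, first_party):
        super().__init__()
        self.first_party = first_party
        self.domains = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script":
            url = attrs.get("src")
        elif tag == "form":
            url = attrs.get("action")
        else:
            url = None
        if url:
            host = urlparse(url).netloc
            # Relative URLs have no netloc and stay first-party
            if host and host != self.first_party:
                self.domains.add(host)

# Hypothetical page markup for illustration
page = """
<form action="https://pay.example-processor.com/charge"><input name="card"></form>
<script src="https://www.google-analytics.com/analytics.js"></script>
<script src="/static/app.js"></script>
"""
auditor = ThirdPartyAuditor("shop.example.com")
auditor.feed(page)
print(sorted(auditor.domains))
# ['pay.example-processor.com', 'www.google-analytics.com']
```

A real audit would also need to account for dynamically injected scripts and XHR/fetch calls, which is why the report’s numbers come from observing live client-side connections rather than static markup alone.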
No attack is more widespread than XSS
Standards-based security controls exist that can prevent these attacks. They are infrequently applied.
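Content-Security-Policy (CSP) is one such standards-based control: a response header in which the site owner whitelists, per resource type, the domains a page may load scripts from or submit forms to. A minimal illustrative policy (the third-party domains here are hypothetical) might look like:

```
Content-Security-Policy:
    default-src 'self';
    script-src 'self' https://www.google-analytics.com;
    form-action 'self' https://pay.example-processor.com;
    connect-src 'self' https://www.google-analytics.com
```

With this in place the browser blocks scripts injected from unlisted domains and refuses form submissions to unlisted endpoints – the class of Magecart-style skimming described above. Note, however, that a whitelisted domain can itself be abused for exfiltration, so CSP needs to be paired with ongoing monitoring.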
Unfortunately, despite high-profile risks and the availability of controls, there has been no significant increase in the adoption of security capable of preventing client-side attacks:
- Over 99% of websites are at risk from trusted, whitelisted domains like Google Analytics. These can be leveraged to exfiltrate data, underscoring the need for continuous PII leakage monitoring and prevention. This has significant implications for data privacy, and by extension, GDPR and CCPA.
- 30% of the websites analyzed had implemented security policies – an encouraging 10% increase over 2019. However…
- Only 1.1% of websites were found to have effective security in place – an 11% decline from 2019. This indicates that while deployment volume went up, effectiveness declined. The attackers have the upper hand largely because we are not playing effective defense.