Researchers at the University of Birmingham have managed to break Intel SGX, a set of security functions used by Intel processors, by creating a $30 device to control CPU voltage.
The work follows a 2019 project, in which an international team of researchers demonstrated how to break Intel’s security guarantees using software undervolting. This attack, called Plundervolt, used undervolting to induce faults and recover secrets from Intel’s secure enclaves.
Intel fixed this vulnerability in late 2019 by removing the ability to undervolt from software with microcode and BIOS updates.
Taking advantage of a separate voltage regulator chip
But now, a team in the University’s School of Computer Science has created a $30 device, called VoltPillager, to control the CPU’s voltage – thus side-stepping Intel’s fix. The attack requires physical access to the computer hardware – which is a relevant threat for SGX enclaves that are often assumed to protect against a malicious cloud operator.
The bill of materials for building VoltPillager is:
- Teensy 4.0 Development Board: $22
- Bus Driver/Buffer × 2: $1
- SOT IC Adapter × 2: $13 for 6
How to build the VoltPillager board
This research takes advantage of the fact that a separate voltage regulator chip controls the CPU voltage. VoltPillager connects to this unprotected interface and injects its own commands to control the voltage precisely. The research shows that this hardware undervolting can achieve the same results as Plundervolt, and more.
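To see why a single voltage-induced computation error is so devastating, consider the classic Bellcore/Lenstra fault attack on RSA-CRT signatures, the style of key recovery the Plundervolt work demonstrated against SGX enclaves. The sketch below is a toy illustration with invented demo parameters, not the researchers' code:

```python
# Toy illustration of why one undervolting-induced fault breaks RSA-CRT
# (the Bellcore/Lenstra attack). Parameters are tiny demo values chosen
# for this sketch; real keys use 1024+ bit primes.
from math import gcd

p, q = 10007, 10009
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)

def sign_crt(m: int, fault: bool = False) -> int:
    """RSA signing via the Chinese Remainder Theorem; `fault` flips one
    bit in the mod-p half-signature, simulating a glitched computation."""
    sp = pow(m, d % (p - 1), p)
    if fault:
        sp ^= 1                      # the single induced fault
    sq = pow(m, d % (q - 1), q)
    h = (pow(q, -1, p) * (sp - sq)) % p
    return (sq + h * q) % n

m = 42
good, bad = sign_crt(m), sign_crt(m, fault=True)
assert pow(good, e, n) == m          # the correct signature verifies
# The faulty signature is still correct mod q but wrong mod p,
# so gcd(bad^e - m, n) reveals the secret factor q:
print(gcd(pow(bad, e, n) - m, n))    # → 10009
```

Because the attacker only needs one faulty signature and one gcd computation, precise control over supply voltage at the moment of signing translates directly into private-key recovery.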
Zitai Chen, a PhD student in Computer Security at the University of Birmingham, says: “This weakness allows an attacker, if they have control of the hardware, to breach SGX security. Perhaps it might now be time to rethink the threat model of SGX. Can it really protect against malicious insiders or cloud providers?”
The European Union Agency for Cybersecurity (ENISA) released its Guidelines for Securing the IoT, which covers the entire IoT supply chain – hardware, software and services.
Supply chains are currently facing a broad range of threats, from physical threats to cybersecurity threats. Organisations are becoming more dependent than ever before on third parties.
As organisations cannot always control the security measures of their supply chain partners, IoT supply chains have become a weak link for cybersecurity. Today, organisations have less visibility and understanding of how the technology they acquire is developed, integrated and deployed than ever before.
“Securing the supply chain of ICT products and services should be a prerequisite for their further adoption particularly for critical infrastructure and services. Only then can we reap the benefits associated with their widespread deployment, as it happens with IoT,” said Juhan Lepassaar, Executive Director, ENISA.
In the context of the development of the guidelines, ENISA has conducted a survey that identifies the existence of untrusted third-party components and vendors, and the vulnerability management of third-party components as the two main threats to the IoT supply chain. The publication analyses the different stages of the development process, explores the most important security considerations, identifies good practices to be taken into account at each stage, and offers readers additional resources from other initiatives, standards and guidelines.
Because IoT solutions are in most cases built from pre-prepared products, introducing the concepts of security by design and security by default is a fundamental building block for protecting this emerging technology. The agency has worked with IoT experts to create specific security guidelines for the whole lifespan of IoT devices.
To help tackle the complexity of IoT, these guidelines focus on bringing together the key actors in the supply chain to adopt a comprehensive approach to security, leverage existing standards and implement security by design principles.
The majority of UK businesses using Oracle E-Business Suite (EBS) are running on old versions of the business critical ERP system, according to a Claremont study.
Of the 154 IT professionals polled, 64% revealed they are running on an earlier version than the current R12.2. With Oracle cutting off premier support for EBS 12.1 in December 2021, these businesses face potential legislative and security issues if they fail to upgrade before the deadline.
58% of the businesses polled said they intend to upgrade to R12.2.
“Businesses intent on upgrading to EBS R12.2 face a race against the clock in order to get it done in time. There is now just 14 months until the deadline, and while that may seem like a long time, given that the survey indicates almost two-thirds of businesses are currently looking to upgrade, there is likely to be resource scarcity in the marketplace. With upgrades taking 6-12 months to complete, vendor selections to be made and business cases to be raised, now is the time to act,” said Mark Vivian, CEO at Claremont.
The study also revealed that the majority of EBS users are currently hosting EBS on physical servers. 69% said they were still using physical servers, compared to just 31% hosting EBS on a cloud platform. 60% of businesses claimed they had no intention of migrating to the cloud, while 26% said they were planning a migration, and just 14% said their migration was underway.
The survey also revealed the reasons why those businesses using cloud platforms to host EBS had chosen their cloud provider. 53% of businesses cited price as the main reason they had chosen their cloud provider, while 40% cited greater agility and flexibility, and just 36% cited better support from the cloud vendor.
Mark Vivian added: “It’s surprising to see that so many businesses are still running Oracle E-Business on physical servers. Moving to cloud infrastructure means a shift towards greater agility, crucial for organisations to survive and thrive in response to the accelerating pace of change in today’s marketplace.”
2020 presented us with many surprises, but the world of data privacy somewhat bucked the trend. Many industry verticals suffered losses, uncertainty and closures, but the protection of individuals and their information continued to truck on.
After many websites simply blocked access unless you accepted their cookies (now deemed unlawful), we received clarity on cookies from the European Data Protection Board (EDPB). With the ending of Privacy Shield, we witnessed the cessation of a legal basis for cross border data transfers.
Severe fines levied for General Data Protection Regulation (GDPR) non-compliance showed organizations that the regulation is far from toothless and that data protection authorities are not easing up just because there is an ongoing global pandemic.
What can we expect in 2021? Undoubtedly, the number of data privacy cases brought before the courts will continue to rise. That’s not necessarily a bad thing: with each case comes additional clarity and precedent on the many areas of the regulation that, to date, are open to interpretation and conjecture.
Last time I spoke to the UK Information Commissioner’s Office regarding a technicality surrounding data subject access requests (DSARs) submitted by a representative, I was told that I was far from the only person enquiring about it, and this only illustrates some of the ambiguities faced by those responsible for implementing and maintaining compliance.
Of course, this is just the GDPR. There are many other data privacy legislative frameworks to consider. We fully expect 2021 to bring full alignment of the ePrivacy Regulation with the GDPR, eradicating the conflict that exists today, particularly around consent, soft opt-in, etc., where the GDPR is very clear but the current Privacy and Electronic Communications Regulations (PECR) are not.
That is just inside Europe; across the globe we’re seeing continued development of data localization laws, which organizations are mandated to adhere to. In the US, the California Consumer Privacy Act (CCPA) has kickstarted a swathe of data privacy reforms within many states, with many calls for something similar at the federal level.
The following year(s) will see that build and, much like with the GDPR, precedent-setting cases are needed to provide more clarity regarding the rules. Will Americans look to replace the shattered Privacy Shield framework, or will they adopt Standard Contractual Clauses (SCCs) more widely? SCCs are a very strong legal basis, providing the clauses are updated to align with the GDPR (something else we’d expect to see in 2021), and I suspect the US will take this road as the realization of the importance of trade with the EU grows.
Other noteworthy movements in data protection laws are happening in Russia, where amendments to the Federal Law on Personal Data are taking a closer look at TLS as a protective measure, and in the Philippines, where the Data Privacy Act of 2012 is being replaced by a new bill (currently a work in progress, but it’s coming).
One of the biggest events of 2021 will be the UK leaving the EU. The British implementation of the GDPR comes in the form of the Data Protection Act 2018. Aside from a few deregulations, it’s the GDPR and that’s great… as far as it goes. Having strong local data privacy laws is good, but after enjoying 47 years (at the time of writing) of free movement within the Union, how will being outside the EU impact British business?
It is thought and hoped that the UK will be granted an adequacy decision fairly swiftly, given that local UK laws have historically aligned with those inside the Union, but there is no guarantee. The uncertainty around how data transfers will look in future might result in British industry using more SCCs. The currently low-priority plans to make Binding Corporate Rules (BCR) easier and more affordable will come sharply to the fore as demand for them goes up.
One thing is certain, it’s going to be a fascinating year for data privacy and we are excited to see clearer definitions, increased certification, precedent-setting case law and whatever else unfolds as we continue to navigate a journey of governance, compliance and security.
The global COVID-19 pandemic that hit every corner of the world forced us to reimagine our societies and reinvent the way we work and live. The Europol IOCTA 2020 cybercrime report takes a look at this evolving threat landscape.
Although this crisis showed us how criminals actively take advantage of society at its most vulnerable, this opportunistic behavior should not overshadow the overall threat landscape. In many cases, COVID-19 has enhanced existing problems.
Social engineering and phishing remain an effective threat enabling other types of cybercrime. Criminals use innovative methods to increase the volume and sophistication of their attacks, and inexperienced cybercriminals can carry out phishing campaigns more easily through crime-as-a-service.
Criminals quickly exploited the pandemic to attack vulnerable people; phishing, online scams and the spread of fake news became an ideal strategy for cybercriminals seeking to sell items they claim will prevent or cure COVID-19.
Encryption continues to be a clear feature of an increasing number of services and tools. One of the principal challenges for law enforcement is how to access and gather relevant data for criminal investigations.
The value of being able to access data of criminal communication on an encrypted network is perhaps the most effective illustration of how encrypted data can provide law enforcement with crucial leads beyond the area of cybercrime.
Malware reigns supreme
Ransomware attacks have become more sophisticated, targeting specific organizations in the public and private sector through victim reconnaissance. While the pandemic has triggered an increase in cybercrime, ransomware attacks were targeting the healthcare industry long before the crisis.
Moreover, criminals have added another layer to their ransomware attacks by threatening to auction off the compromised data, increasing the pressure on victims to pay the ransom.
Advanced forms of malware are a top threat in the EU: criminals have transformed some traditional banking Trojans into modular malware to cover more PC digital fingerprints, which are later sold for different needs.
Child sexual abuse material continues to increase
The main threats related to the online sexual exploitation of children have remained stable in recent years; however, detection of online child sexual abuse material saw a sharp spike at the peak of the COVID-19 crisis.
Offenders keep using a number of ways to hide this horrifying crime, such as P2P networks, social networking platforms and using encrypted communications applications.
Dark web communities and forums are meeting places where participation is structured with affiliation rules to promote individuals based on their contribution to the community, which they do by recording and posting their abuse of children, encouraging others to do the same.
Livestreaming of child abuse continues to increase, becoming even more popular than usual during the COVID-19 crisis, when travel restrictions prevented offenders from physically abusing children. In some cases, offenders use video chat applications with built-in payment systems, which is one of the key challenges for law enforcement because this material is not recorded.
Payment fraud: SIM swapping a new trend
SIM swapping, which allows perpetrators to take over accounts, is one of the new trends. As a type of account takeover, SIM swapping provides criminals access to sensitive user accounts.
Criminals fraudulently swap or port victims’ SIMs to one in the criminals’ possession in order to intercept the one-time password step of the authentication process.
Criminal abuse of the dark web
In 2019 and early 2020 there was a high level of volatility on the dark web. The lifecycle of dark web marketplaces has shortened, and no clearly dominant market has emerged over the past year.
Tor remains the preferred infrastructure; however, criminals have started to use other privacy-focused, decentralized marketplace platforms to sell their illegal goods. Although this is not a new phenomenon, the use of such platforms has increased over the last year.
OpenBazaar is noteworthy, as certain threats have emerged on the platform over the past year such as COVID-19-related items during the pandemic.
VP for Promoting our European Way of Life, Margaritis Schinas, who is leading the European Commission’s work on the European Security Union, said: “Cybercrime is a hard reality. While the digital transformation of our societies evolves, so does cybercrime which is becoming more present and sophisticated.
“We will spare no efforts to further enhance our cybersecurity and step up law enforcement capabilities to fight against these evolving threats.”
EU Commissioner for Home Affairs, Ylva Johansson, said: “The Coronavirus Pandemic has slowed many aspects of our normal lives. But it has unfortunately accelerated online criminal activity. Organised Crime exploits the vulnerable, be it the newly unemployed, exposed businesses, or, worst of all, children.”
The GAIA-X Initiative announced that it is one step closer to its goal of a trustworthy, sovereign digital infrastructure for Europe, with the official signing of incorporation papers for GAIA-X AISBL, a non-profit association that will take the project to the next level.
GAIA-X: A vision for Europe
The initiative’s twenty-two founding members signed the documents in Brussels to create an association for securing funding and commitment from members to fulfill the initiative’s vision for Europe.
“We are deeply motivated to meet the challenges of the European digital economy,” said Servane Augier, COO at 3DS OUTSCALE.
“Through GAIA-X, we are building, all together, a sovereign and reliable digital infrastructure and an ecosystem for innovation in Europe. In this way, we will strengthen the digital sovereignty of businesses, research and education, governments and society as a whole.”
Seeking active participation and membership
While final incorporation is pending, the founding members of GAIA-X AISBL are seeking active participation and membership from national and multi-national, European and non-European companies, as well as partners in the worlds of science and politics, who share European standards and values.
The association views its members as the primary drivers of progress and innovation, working closely together to define standards and prototype implementations from both provider and user perspectives.
“The BMW Group sees the future of automotive software in the cloud, whether it is about pioneering IT solutions for the development and production of premium vehicles, new digital services for our customers or innovative features in the car,” said Marco Görgmaier, Head of DevOps Platform and Cloud Technologies at the BMW Group.
“Participation in the GAIA-X project is a logical step in our intention to further expand our innovative strength. The goals of the GAIA-X project – striving for data sovereignty, reducing dependencies, establishing cloud services on a broad scale and creating an open ecosystem for innovation – are fully in line with our own efforts.”
Setting up head office in Brussels
As the incorporation process moves forward, the association will continue to set up its head office in Brussels and establish key organizational structures.
Overall, the GAIA-X founders aim to establish a culture of trust, knowledge exchange and transparency. They anticipate that as the membership of GAIA-X grows, it will be able to have an increasing impact on innovation and collaboration in the development of technical solutions and standards for business, science and society across Europe.
A desire to remain compliant with the European Union’s General Data Protection Regulation (GDPR) and other privacy laws has made HR leaders wary of any new technology that digs too deeply into employee emails. This is understandable, as GDPR non-compliance may lead to stiff penalties.
At the same time, new technologies are applying artificial intelligence (AI) and machine learning (ML) to solve HR problems like analyzing employee data to help with hiring, completing performance reviews or tracking employee engagement. This has great potential for helping businesses coach and empower employees (and thus help them retain top talent), but these tools often analyze employee emails as a data source. Does this create a privacy issue in regard to the GDPR?
In most cases, the answer is “no.” Let’s explore these misconceptions and explain how companies can stay compliant with global privacy laws while still using AI/ML workplace technologies to provide coaching and empowerment solutions to their employees.
Analyzing employee data with AI/ML isn’t unique to HR
First of all, many applications already analyze digital messages with AI/ML. Many of these are likely already used by your organization, and they do not ask for consent from every sender for every message they analyze. Antivirus software uses AI/ML to scan incoming messages for viruses, chatbots use it to answer support emails, and email clients themselves use AI/ML to suggest responses to common questions as the user types them or to create prompts to schedule meetings.
Applications like Gmail, Office 365 Scheduler, ZenDesk and Norton Antivirus do these tasks all the time. Office 365 Scheduler even analyzes emails using natural language processing to streamline the simple task of scheduling a meeting. Imagine if they had to ask for the user’s permission every time they did this! HR technologies that do something similar are not unique.
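As a concrete, entirely hypothetical illustration of the kind of automated message analysis such tools perform, here is a toy naive Bayes classifier that routes incoming messages into categories. The training phrases and labels are invented for this sketch; no real product works exactly this way:

```python
# Hypothetical sketch of automated message analysis: a tiny naive Bayes
# classifier routing mail by topic, trained on invented example phrases.
from collections import Counter
import math

TRAINING = [
    ("reset my password please", "support"),
    ("cannot log in to my account", "support"),
    ("schedule a meeting for tuesday", "scheduling"),
    ("free prize click this link now", "spam"),
    ("claim your free reward now", "spam"),
]

# Count word frequencies per label.
word_counts = {label: Counter() for _, label in TRAINING}
label_counts = Counter()
for text, label in TRAINING:
    word_counts[label].update(text.split())
    label_counts[label] += 1

def classify(text: str) -> str:
    """Pick the label maximizing log P(label) + sum log P(word|label),
    with add-one smoothing so unseen words don't zero out a class."""
    vocab = {w for counts in word_counts.values() for w in counts}
    best, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        total = sum(counts.values())
        score = math.log(label_counts[label] / len(TRAINING))
        for word in text.split():
            score += math.log((counts[word] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

print(classify("click now to claim a free prize"))  # → spam
```

The point of the sketch is that the classifier only computes word statistics over the message; it neither stores the message nor asks the sender for consent, which is exactly the mode of operation the paragraph above describes.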
Employers also process employees’ personal data without their consent on a daily basis. Consider these tasks: automatically storing employee communications, creating paperwork for employee reviews or disciplinary action, or sending payroll information to government agencies. Employees don’t need to give consent for any of this, because a different legal basis allows the company to process data in this way.
Companies do not need employee consent in this context
This isn’t an issue because the GDPR offers five alternative legal bases pursuant to which employee personal data can be processed, including the pursuit of the employer’s “legitimate interests.” This concept is intentionally broad and gives organizations flexibility to determine whether their interests are appropriate, regardless of whether those interests are commercial, individual, or broader societal benefits, or whether the interests are the company’s own or those of a third party.
The GDPR singles out preventing fraud and direct marketing as two specific purposes for which personal data may be processed in pursuit of a legitimate interest, but there are many more.
These “legitimate interest” bases give employers grounds to process personal data using AI/ML applications without requiring consent. In fact, employers should avoid relying on consent to process employees’ personal data whenever possible. Employees are almost never in a position to give consent voluntarily or freely, due to the imbalance of power inherent in employer-employee relationships, so such consents are often invalid. In all the cases listed above, the employer relies on legitimate interest to process employee data. HR tools fall into the same category and don’t require consent.
A right to control your inbox
We’ve established that employers can process email communication data internally with new HR tools that use AI/ML and be compliant with the GDPR. But should they?
Here is where we move from legal issues to ethical issues. Some companies that value privacy might believe that employees should control their own inbox, even though that’s not a GDPR requirement. That means letting employees grant and revoke permission to the applications that can read their workplace emails (and which have already been approved by the company). This lets the individual control their own data. Other organizations may value the benefits of new tools over employee privacy and may put them in place without employees’ consent.
I have seen some organizations create a middle ground by making these tools available to employees but requiring them to opt in to use them (rather than installing them and giving employees the option to opt out, which puts an extra burden on them to maintain privacy). This can both respect employees’ privacy and allow HR departments to use new technologies to empower individuals if they so choose. This is more important than ever in the new era of widespread work from home, where workplace communication is abundant and companies are charting new courses to help their employees thrive in the future of work.
Fully understanding the compliance and privacy issues associated with new AI/ML tools is key to rolling them out effectively. These solutions can be powerful, helping your employees become more self-aware and better leaders, but organizations should understand the rules before putting them in place.
Two years of the GDPR: What was achieved?
Citizens are more empowered and aware of their rights: The GDPR enhances transparency and gives individuals enforceable rights, such as the right of access, rectification, erasure, the right to object and the right to data portability. Today, 69% of the population above the age of 16 in the EU have heard about the GDPR and 71% have heard of their national data protection authority, according to results published last week in a survey from the EU Fundamental Rights Agency. However, more can be done to help citizens exercise their rights, notably the right to data portability.
Data protection rules are fit for the digital age: The GDPR has empowered individuals to play a more active role in relation to what is happening with their data in the digital transition. It is also contributing to fostering trustworthy innovation, notably through a risk-based approach and principles such as data protection by design and by default.
Data protection authorities are making use of their stronger corrective powers: From warnings and reprimands to administrative fines, the GDPR provides national data protection authorities with the right tools to enforce the rules. However, they need to be adequately supported with the necessary human, technical and financial resources. Many Member States are doing this, with notable increases in budgetary and staff allocations. Overall, there has been a 42% increase in staff and 49% in budget for all national data protection authorities taken together in the EU between 2016 and 2019. However, there are still stark differences between Member States.
Data protection authorities are working together in the context of the European Data Protection Board (EDPB), but there is room for improvement: The GDPR established a governance system which is designed to ensure a consistent and effective application of the GDPR through the so-called ‘one-stop-shop’, which provides that a company processing data cross-border has only one data protection authority as interlocutor, namely the authority of the Member State where its main establishment is located. Between 25 May 2018 and 31 December 2019, 141 draft decisions were submitted through the ‘one-stop-shop’, 79 of which resulted in final decisions. However, more can be done to develop a truly common data protection culture. In particular, the handling of cross-border cases calls for a more efficient and harmonised approach and an effective use of all tools provided in the GDPR for the data protection authorities to cooperate.
Advice and guidelines by data protection authorities: The EDPB is issuing guidelines covering key aspects of the Regulation and emerging topics. Several data protection authorities have created new tools, including helplines for individuals and businesses, and toolkits for small and micro-enterprises. It is essential to ensure that guidance provided at national level is fully consistent with guidelines adopted by the EDPB.
Harnessing the full potential of international data transfers: Over the past two years, the Commission’s international engagement on free and safe data transfers has yielded important results. This includes Japan, with which the EU now shares the world’s largest area of free and safe data flows. The Commission will continue its work on adequacy, with its partners around the world. In addition and in cooperation with the EDPB, the Commission is looking at modernising other mechanisms for data transfers, including Standard Contractual Clauses, the most widely used data transfer tool. The EDPB is working on specific guidance on the use of certification and codes of conduct for transferring data outside of the EU, which need to be finalised as soon as possible. Given that the European Court of Justice may provide clarifications in a judgment to be delivered on 16 July that could be relevant for certain elements of the adequacy standard, the Commission will report separately on the existing adequacy decisions after the Court of Justice has handed down its judgment.
Promoting international cooperation: Over the last two years, the Commission has stepped up bilateral, regional and multilateral dialogue, fostering a global culture of respect for privacy and convergence between different privacy systems to the benefit of citizens and businesses alike. The Commission is committed to continuing this work as part of its broader external action, for example, in the context of the Africa-EU Partnership and in its support for international initiatives, such as ‘Data Free Flow with Trust’. At a time when violations of privacy rules may affect large numbers of individuals simultaneously in several parts of the world, it is time to step up international cooperation between data protection enforcers. This is why the Commission will seek authorisation from the Council to open negotiations for the conclusion of mutual assistance and enforcement cooperation agreements with relevant third countries.
GDPR: What’s next?
According to the report, in two years the GDPR has met most of its objectives, in particular by offering citizens a strong set of enforceable rights and by creating a new European system of governance and enforcement.
The GDPR proved to be flexible to support digital solutions in unforeseen circumstances such as the COVID-19 crisis. The report also concludes that harmonisation across the Member States is increasing, although there is a certain level of fragmentation that must be continually monitored. It also finds that businesses are developing a compliance culture and increasingly use strong data protection as a competitive advantage.
The GDPR has acted as a catalyst for many countries and states around the world – e.g., Chile, South Korea, Brazil, Japan, Kenya, India, Tunisia, Indonesia, Taiwan and the state of California – to consider how to modernise their privacy rules, the EC noted.
They also pointed out that the GDPR provides data protection authorities with many corrective powers to enforce it (administrative fines, orders to comply with data subjects’ requests, bans on processing or the suspension of data flows, etc.).
There is room for improvement, though.
“For example, we need more uniformity in the application of the rules across the Union: this is important for citizens and for businesses, especially SMEs. We need also to ensure that citizens can make full use of their rights,” noted Didier Reynders, Commissioner for Justice.
The EC also noted that stakeholders should closely monitor the application of the GDPR to new technologies such as AI, the Internet of Things, and blockchain.
Two years after the GDPR went into effect, official data show that Data Protection Authorities (DPAs), crippled by a lack of resources, tight budgets, and administrative hurdles, have not yet been able to deliver adequate GDPR enforcement.
Worse, some public authorities have grossly misused the GDPR to undermine other fundamental rights such as the right to free expression and freedom of the press, Access Now reveals.
The GDPR’s first two years have been marked by crisis, whether internal, external, political, geopolitical, or administrative. Beyond enforcement challenges, the report explores how these crises have impacted the protection of personal data in the EU, taking a close look at both Brexit and the COVID-19 outbreak.
“Through this report, we raise the alarm to the EU institutions and Data Protection Authorities that it’s high time to act to enforce the GDPR and condemn its misuses,” said Estelle Massé, Senior Policy Analyst and Global Data Protection Lead at Access Now.
“The European Union may have the best law in the world for the protection of personal data, but if it is not enforced, it risks being as useful as a chocolate teapot.”
The GDPR remains a strong framework, and if authorities take urgent action, it can go a long way in defending people’s fundamental rights.
GDPR around the world
From May 2018 to March 2020, authorities levied 231 fines and sanctions while as many as 144,376 complaints were filed between May 2018 and May 2019.
Out of 30 DPAs from all 27 EU countries, the United Kingdom, Norway, and Iceland, only nine said they were happy with their level of resourcing. The inadequate budget provided to DPAs means that our rights may not be effectively protected. In fact, it may create a negative incentive for DPAs investigating large tech companies to agree on settlements that may be more favorable to the companies. This is reinforced by the huge disparity of resources between data protection authorities and companies they oversee.
In Poland, Romania, Hungary, and Slovakia, courts and authorities have been abusing the GDPR to curtail investigative journalism or target civic tech NGOs by trying to force outlets to reveal their sources.
The GDPR is a robust tool to guide officials and public health authorities in the response to the COVID-19 crisis. Access Now condemns Hungary’s disproportionate decision to limit the application of GDPR rights during the COVID-19 crisis as it gravely endangers people’s right to data protection at a time when our personal information, including our health data, is being collected perhaps more than ever.
Enforcement challenges and the UK’s insistence on lowering current standards through the Brexit talks have implications for any future negotiations of a so-called adequacy decision between the EU and the UK that would authorize the transfer of data between the two jurisdictions.
Governments across the EU must increase the financial and human resources allocated to Data Protection Authorities, including technical staff, so that they can function properly and be able to address the large number of complaints.
The European Commission should launch infringement procedures against EU states:
- When they do not provide sufficient resources to Data Protection Authorities, or
- When they do not guarantee the Data Protection Authority independence in status and in practices, or
- Where Data Protection Authorities or courts misuse the GDPR to restrict freedom of the press or stifle civil society’s work.
Data Protection Authorities must not misuse the GDPR, as they hold much of the responsibility for the GDPR’s success or failure. It is absolutely unacceptable that DPAs misuse the GDPR to undermine civil society, restrict freedom of the press, or otherwise violate human rights.
May 25th is the second anniversary of the General Data Protection Regulation (GDPR) and data around compliance with the regulation shows a significant disconnect between perception and reality.
Only 28% of firms comply with GDPR; however, before GDPR kicked off, 78% of companies felt they would be ready to fulfill data requirements. While their confidence was high, when push comes to shove, complying with GDPR and GDPR-like laws – like CCPA and PDPA – is not as easy as initially thought.
Data privacy efforts
While crucial, facing this growing set of regulations is a massive, expensive undertaking. If a company is found out of compliance with GDPR, it's looking at fines of up to 4% of annual global turnover. To put that percentage in perspective, the 28 major fines handed down since the GDPR took effect in May 2018 total $464 million – a hefty sum for sure.
Additionally, there is a cost to comply – something nearly every company faces today if they conduct business on a global scale. For CCPA alone, the State of California DoJ estimates the initial cost of getting California businesses into compliance at around $55 billion. That's just to comply with one regulation.
Here’s the reality: compliance is incredibly expensive, but not quite as expensive as being caught being noncompliant. This double-edged sword is unfortunate, but it is the world we live in. So, how should companies navigate in today’s world to ensure the privacy rights of their customers and teams are protected without missing the mark on any one of these regulatory requirements?
Baby steps to compliance
A number of companies are approaching these various privacy regulations one-by-one. However, taking a separate approach for each one of these regulations is not only extremely laborious and taxing on a business, it’s unnecessary.
Try taking a step back and identifying the common denominator across all of the regulations. You’ll find that in the simplest form, it boils down to knowing what data you actually have and putting the right controls in place to ensure you can properly safeguard it. Implementing this common denominator approach can free up a lot of time, energy and resources dedicated to data privacy efforts across the board.
Consider walking through these steps when getting started: First, identify the sensitive data housed within systems, databases and file stores (e.g., Box, SharePoint). Next, identify who has access to what, so you can ensure that only the people who should have access do. This is crucial to protecting customer information. Lastly, implement controls to keep employee access updated. Using policies to keep access consistent is important, but it's crucial that they are updated and stay current with any organizational changes.
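The first two steps above can be sketched in a few lines. This is a minimal illustration, not a production scanner: the regex patterns, field names and access lists are hypothetical placeholders, and real data-discovery tools use far richer detectors.

```python
import re

# Illustrative PII patterns -- placeholders, not an exhaustive detector.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(text):
    """Step 1: report which sensitive-data categories appear in a document."""
    return {name for name, pat in PII_PATTERNS.items() if pat.search(text)}

def audit_access(current_acl, authorized):
    """Step 2: flag accounts that hold access but are not on the approved list."""
    return sorted(set(current_acl) - set(authorized))
```

For example, `find_pii("reach me at bob@example.com")` flags the document as containing an email address, and `audit_access(["alice", "bob", "eve"], ["alice", "bob"])` surfaces `eve` as an account whose access should be reviewed.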
Staying ahead of the game
The only way to stay ahead of the numerous privacy regulations is to take a general approach to privacy. We've already seen extensions of existing regulations, like the California Privacy Rights and Enforcement Act of 2020 ('CCPA 2.0,' as some call it), which would amend the CCPA. If this legislation takes effect, it would create a whole new set of privacy rights that align well with GDPR, putting greater safeguards around sensitive personal information. It's my opinion that, since the world has begun recognizing privacy rights as more valuable than ever, we'll continue to see amendments piggybacking on existing regulations across the globe.
While many of us have essentially thrown in the towel, knowing that our own personal data is already out there on the dark web, that doesn't mean we can all sit back and let this continue to happen. Doing so would be to the detriment of our customers' privacy, cost-prohibitive and ineffective.
So, what are the key takeaways? Make your data privacy efforts just as central as the rest of your security strategy. Ensure it is holistic and takes into account all the facets of, and overlaps between, the various regulations we're all required to comply with today. Only then do you stand a chance at protecting your customers' and employees' data and avoiding becoming another news headline and a tally on the GDPR fine count.
As many organizations are still discovering, compliance is complicated. Stringent regulations, like the GDPR and the CCPA, require multiple steps from numerous departments within an enterprise in order to achieve and maintain compliance. From understanding the regulations, implementing technologies that satisfy legal requirements, hiring qualified staff and training, to documentation updating and reporting – ongoing compliance can be costly and time intensive.
In fact, a report found that one-third of all enterprises (defined as businesses with 1000+ employees) spent more than $1 million on GDPR compliance alone.
As more states move to adopt GDPR-like regulations, such as California’s CCPA and Washington’s failed, but not forgotten Washington Privacy Act (WPA) legislation, organizations are having to look very closely at their data sets and make critical decisions to ensure compliance and data security.
But what can be done to minimize the scope of these stringent and wide-reaching regulations?
If an organization can identify all of its personal data and take it out of the data security and compliance equation completely – rendering it useless to hackers, insider threats, and regulation scope – it can eliminate a huge amount of risk and drastically reduce the cost of compliance.
Enter synthetic data
Organizations like financial institutions and hospitals handle large quantities of extremely sensitive credit/debit card and personally identifiable information (PII). As such, they must navigate a very stringent set of compliance protocols – they can fall under the GDPR, CCPA, PCI DSS and additional laws and regulations depending on their location and the location of their customers.
Synthetic data is helping highly regulated companies safely use customer data to increase efficiencies or reduce operational costs, without falling under the scope of stringent regulations.
Synthetic data makes this possible by removing identifiable characteristics of the institution, customer and transaction to create what is called a synthetic data set. Personally identifiable information is rendered unrecognizable by a one-way hash process that cannot be reversed. A cutting-edge data engine makes minor and random field changes to the original data, keeping the consumer identity and transaction associated with that consumer completely protected.
Once the data is synthesized, it's impossible for a hacker or malicious insider to reverse-engineer the data. This makes the threat of a data breach a non-issue for even the largest enterprises. Importantly, this synthetic data set still keeps all the statistical value of the original data set, so that analysis and other data strategies may be safely conducted, such as AI algorithm feeding, target marketing and more.
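The two ingredients described above, a one-way hash of identifiers plus minor random changes to other fields, can be sketched as follows. This is a toy illustration under assumed field names (`customer_id`, `amount`): real synthetic-data engines model the full joint distribution of the data rather than jittering one column.

```python
import hashlib
import random

def synthesize(records, noise=0.02, seed=0):
    """Toy synthesis: one-way hash the identifier, jitter the numeric field.

    The field names and the +/-2% noise model are illustrative assumptions.
    """
    rng = random.Random(seed)
    out = []
    for rec in records:
        # SHA-256 is a one-way function: the original ID cannot be recovered.
        digest = hashlib.sha256(rec["customer_id"].encode()).hexdigest()
        out.append({
            "customer_id": digest[:16],  # irreversible pseudonym
            "amount": round(rec["amount"] * (1 + rng.uniform(-noise, noise)), 2),
        })
    return out
```

Because the perturbation is small and zero-centered, aggregate statistics such as the mean transaction amount remain close to those of the original data set, which is what keeps the synthetic copy useful for analytics.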
What do the major data privacy regulations say about synthetic data?
The CCPA does not expressly reference synthetic data, but it expressly excludes de-identified data from most of the CCPA’s requirements in cases where the requisite safeguards are in place. Synthesized data as defined is considered de-identified data. The CCPA also excludes from its coverage personal information subject to several federal privacy laws and comparable California state laws, including “personal information collected, processed, sold, or disclosed pursuant to Gramm-Leach-Bliley Act (GLBA) and the California Financial Information Privacy Act.”
Likewise, the GDPR does not expressly reference synthetic data, but it expressly says that it does not apply to anonymous information: according to UCL, “information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable.” Synthetic data is considered personal data which has been rendered anonymous and therefore falls outside the material scope of the GDPR.
Essentially, these important global regulatory mandates do not apply to collection, storage and use of synthesized data.
A big solution for big struggles
As businesses continue to grow in size and number of customers, the amount and frequency of data that flows in also increases dramatically. With these vast streams of data comes a struggle to collect, store and use customer data in a private and secure manner. This struggle is also becoming more publicly known, as headlines of data breaches or compliance violations flood news feeds seemingly every week.
To effectively and efficiently manage the influx of sensitive data while staying compliant and secure, companies can implement synthetic data in their environments with zero risk. Companies can use synthetic data to dig into customer action likelihood, analytics, customer segmentation for marketing, fraud detection trends, and more without jeopardizing compliance or data privacy.
And with data being the key to actualizing machine learning and artificial intelligence engines, companies can also utilize synthetic data to gain valuable insights into their algorithm data and design new products, reduce operational costs, and analyze new business endeavors while keeping customer privacy intact.
With the GDPR and the CCPA now in full effect and more industry and region-specific data regulations on the horizon, organizations of all sizes that want to reduce the burden of compliance will look to use synthetic data technology to manage their privacy and data security-related legal obligations.
Synthetic data helps organizations in highly regulated industries put customer data security and privacy first and keep their data operations frictionless and optimized while minimizing the scope of compliance. The more organizations that adopt synthetic data, the safer personal information transactions become, and the more organizations are free to conduct business without having to worry about regulation.
An IT startup has developed a novel blockchain-based approach for secure linking of databases, called ChainifyDB.
“Our software resembles keyhole surgery. With a barely noticeable procedure we enhance existing database infrastructures with blockchain-based security features. Our software is seamlessly compatible with the most common database management systems, which drastically reduces the barrier to entry for secure digital transactions,” explains Jens Dittrich, Professor of Computer Science at Saarland University at Saarbrücken, Germany.
How does ChainifyDB work?
The system offers various mechanisms for a trustworthy data exchange between several parties. The following example shows one of its use cases.
Assume some doctors are treating the same patient and want to maintain his or her patient file together. To do this, the doctors would have to install the Saarbrücken researchers’ software on their existing database management systems. Then, they could jointly create a data network.
In this network, the doctors set up a shared table in which they enter the patient file for the shared patient. “If a doctor changes something in his table, it affects all other tables in the network. Subsequent changes to older table states are only possible if all doctors in the network agree,” explains Jens Dittrich.
Another special feature: If something about the table is changed, the focus is not on the change itself, but on its result. If the result is identical in all tables in the network, the changes can be accepted. If not, the consensus process starts again.
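The result-focused consensus described above can be sketched as follows. This is a simplified illustration of the idea, not ChainifyDB's actual protocol: each participant digests the table state that *results* from applying a change, and the change is accepted only if all digests agree.

```python
import hashlib
import json

def table_digest(table):
    """Digest the result of a change: the table's rows, order-independent."""
    rows = sorted(json.dumps(row, sort_keys=True) for row in table)
    return hashlib.sha256("\n".join(rows).encode()).hexdigest()

def accept_change(participant_tables):
    """Accept only if every participant arrived at an identical result."""
    return len({table_digest(t) for t in participant_tables}) == 1
```

Comparing results rather than change operations means it does not matter how each node applied the update internally; if any node's table diverges, the digests differ and the consensus process starts again.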
“This makes the system tamper-proof and guarantees that all network participants’ tables always have the same status. Furthermore, only the shared data in the connected tables is visible to other network participants; all other contents of the home database remain private”, emphasizes Dr. Felix Martin Schuhknecht, Principal Investigator of the project.
Advantages for security-critical situations
The new software offers advantages especially for security-critical situations, such as hacker attacks or when business partners cannot completely trust each other. Malicious participants can be excluded from a network without impairing its functionality.
If a former participant is to be reinstated, the remaining network participants only have to agree on a “correct” table state. The previously suspended partner can then be set to this state. “As far as we know, this function is not yet offered by any comparable software,” adds Dittrich.
In order to bring ChainifyDB to market, the German Federal Ministry of Education and Research is supporting the Saarbrücken researchers’ start-up, which is currently being founded, with 840,000 euros.
The information security landscape seems to evolve at a faster clip each year. The deluge of ever-changing threats, attack techniques and new breaches making headlines can be challenging to track and assess. That’s why each year the WatchGuard Threat Lab takes a step back to assess the world of cyber security and develop a series of predictions for what emerging trends will have the biggest impact.
Following the worldwide controversy over hacking that influenced the 2016 presidential election and the many widely publicized privacy and security incidents that have taken place since, we believe the government information security sphere is the stage upon which we’ll see two major security developments play out in 2020.
The first is that bad actors will target voter registration systems with the intent to generate voting havoc and trigger voter fraud alerts. The second is that we’ll see multiple states enact privacy regulations inspired by GDPR and the CCPA. Let’s take a look at how these two issues will unfold in 2020 and what you need to know to be prepared.
Impending voter registration systems hacks
Security researchers have proven many times over that voting machines are hackable, but most of them don’t expect threat actors to expend the vast amount of time and resources needed to successfully hack the 2020 presidential election voting results directly. Instead, these online adversaries will use subtler tactics in the coming months to tamper with the voting process at the state and local level.
The culprits behind previous election-related attacks are state-sponsored actors that are happy to execute highly effective, politically motivated misinformation campaigns across social media platforms, but appear to draw the line at actually altering the voting results themselves. In 2020, they’ll seek to build on the success they achieved in 2016. We believe they will target US voter registration systems to make it more difficult for legitimate voters to cast their ballot and attempt to cause widespread mistrust in the validity of vote counts. Indirectly influencing the election by creating confusion, fear, uncertainty and doubt will be their MO.
What can we do about it? For state and local government departments managing voter registration systems it will be important to perform security audits and find and fix potential vulnerabilities before the bad guys have a chance to exploit them.
While there’s not a tremendous amount the average voter can do to ward off election hacking attempts by state-sponsored cyber criminals, there are some basic things you should keep in mind to make sure your voice is heard on election day. First, double-check the status of your voter registration at least a week before the election. Monitor the news for any updates about voter registration database hacks leading up to the election and be sure to contact your local state voter authority if you’re concerned. Lastly, bring a printed confirmation of your completed voter registration and multiple forms of ID on election day (just in case).
An upsurge in state-level privacy legislation
The European Union made a global splash when it implemented the GDPR. Designed to provide better privacy for its citizens’ data (regardless of the location of the organizations with access to it), the historic law was initially met with cynicism and uncertainty (and even panic in some cases) due to its stringent criteria and heavy penalties for noncompliance.
That said, since its inception, the level of privacy the law provides for individuals has been well-received. People welcome the comfort of knowing that organizations are finally being incentivized to protect their privacy and held accountable for mishandling their data. It goes a long way to inspire confidence in the public when organizations like Google and Marriott are fined millions of euros for GDPR violations.
Massive organizations like Facebook continue to neglect their obligation to safeguard user data, and America's appetite for privacy seems to be growing with each passing data breach and scandal involving the sale of user data. That's why in 2020 you should expect to see 10 or more states enact privacy laws similar to GDPR.
In fact, California has already passed its own CCPA and will begin rolling out fines for violations by mid-year. Given that most states passed mandatory data breach disclosure laws in the mid-2000s and lawmakers still haven’t been able to pass a federal version to date, it’s unlikely that the movement to enact a federal privacy law will gain enough steam to pass in the near term. That said, the rising public outcry for data privacy makes it highly likely that individual states will take it upon themselves to follow in California’s footsteps and pass privacy acts of their own.
This momentum will grow in 2020, so it will be critical for businesses across the country to carefully study the CCPA requirements and prepare to make adjustments. Other states will use the CCPA as a reference point for developing similar regulations of their own. If you’re concerned with your own personal data privacy, contact your local representatives to push for state-level legislation and federal action as well.
The road ahead
The changing conditions within the government information security landscape impact every American business and individual in one way or another. We simply can’t afford to be ignorant or apathetic when it comes to matters of public privacy and security.
Whether it be state-sponsored attempts to interfere with the next election, emerging security and privacy regulations, or some other development, we should all strive to become more informed about and engaged in these issues.
Innovation in cybersecurity is a key enabler to facilitate progress in the NIS industry, boost employment in the cybersecurity sector and growth of EU GDP. ENISA published a report that analyses the current landscape for supporting innovation in cybersecurity in the EU.
The study presents good practices and challenges from the Member States whilst trying to execute innovation as a strategic priority of their National Cyber Security Strategies (NCSS).
“The CSA, the NIS Directive and the GDPR incentivised innovation in relevant areas of cybersecurity and data protection. To encounter current and emerging cybersecurity risks and threats, EU Member States need to strengthen and adjust their national capabilities by developing innovative solutions and objectives under their NCSS,” said Juhan Lepassaar, Executive Director of ENISA.
Different approaches to innovation
Member States follow different approaches to support innovation in the context of National Cyber Security Strategies. In some cases, Member States promote the creation of new skills and capabilities around digital competences.
In other cases, they create networks of stakeholders giving them a mandate on innovation. These networks are either government driven, such as INCIBE, the National Cybersecurity Agency in Spain or industry driven, such as Cyber Ireland. Innovation activities are also driven by national institutions and research centres such as NASK Poland.
Governments should align with industry needs
Governments find it difficult to understand the needs of the industry, as well as to develop expertise in dealing with Public Private Partnerships.
To align with industry needs and identify opportunities for adopting or commercialising research outcomes, Member States need to involve industry directly in research and innovation activities.
Sector specific innovation priorities are needed
Dedicated funding mechanisms and initiatives often pursue broad research and innovation objectives rather than focusing specifically on cybersecurity. Supporting and developing sector-specific innovation priorities is important for coordinating alternative funding mechanisms and developing a sectorial approach to innovation in cybersecurity.
It is necessary to take into account the different cybersecurity needs across sectors and to develop sector-specific innovation priorities at both national and EU level.
Lengthy procurement processes
Lengthy procurement processes prevent SMEs and innovative companies such as start-ups from offering their services to the public sector. Ensuring an adequate level of funding and providing economic incentives such as tax breaks may accelerate the adoption of new technologies, products and services.
The Swedish Innovation Agency allocates a large amount of funds for innovation in cybersecurity.
Geographical clusters support innovation
Geographical clusters are important mechanisms that support innovation. There are several initiatives that bring people together, such as the Brussels initiative on Cybersecurity Innovation.
How to enhance trust for users
Promoting EU level certification of services/products would enhance trust for users within the EU and provide a stamp of approval for international markets.
EU Member States have identified risks and vulnerabilities at national level and published a joint EU risk assessment. Through the toolbox, the Member States are committing to move forward in a joint manner based on an objective assessment of identified risks and proportionate mitigating measures.
Toolbox measures and supporting actions
“Europe has everything it takes to lead the technology race. Be it developing or deploying 5G technology – our industry is already well off the starting blocks. Today we are equipping EU Member States, telecoms operators and users with the tools to build and protect a European infrastructure with the highest security standards so we all fully benefit from the potential that 5G has to offer,” said Thierry Breton, Commissioner for the Internal Market.
Coordinated implementation of the toolbox
While market players are largely responsible for the secure rollout of 5G, and Member States are responsible for national security, 5G network security is an issue of strategic importance for the entire Single Market and the EU’s technological sovereignty.
Closely coordinated implementation of the toolbox is indispensable to ensure EU businesses and citizens can make full use of all the benefits of the new technology in a secure way.
5G will play a key role in the future development of Europe’s digital economy and society. It will be a major enabler for future digital services in core areas of citizens’ lives and an important basis for the digital and green transformations.
With worldwide 5G revenues estimated at €225 billion in 2025, 5G is a key asset for Europe to compete in the global market and its cybersecurity is crucial for ensuring the strategic autonomy of the Union.
Billions of connected objects and systems are concerned, including in critical sectors such as energy, transport, banking, and health, as well as industrial control systems carrying sensitive information and supporting safety systems.
At the same time, due to a less centralized architecture, smart computing power at the edge, the need for more antennas, and increased dependency on software, 5G networks offer more potential entry points for attackers.
Cyber security threats are on the rise and are becoming increasingly sophisticated. As many critical services will depend on 5G, ensuring the security of networks is of the highest strategic importance for the entire EU.
Secure 5G networks: EU toolbox conclusions
The Member States, acting through the NIS Cooperation Group, have adopted the toolbox. The toolbox addresses all risks identified in the EU coordinated assessment, including risks related to non-technical factors, such as the risk of interference from non-EU state or state-backed actors through the 5G supply chain.
In the toolbox conclusions, Member States agreed to strengthen security requirements, to assess the risk profiles of suppliers, to apply relevant restrictions for suppliers considered to be high risk including necessary exclusions for key assets considered as critical and sensitive (such as the core network functions), and to have strategies in place to ensure the diversification of vendors.
While the decision on specific security measures remains the responsibility of Member States, the collective work on the toolbox demonstrates a strong determination to jointly respond to the security challenges of 5G networks.
This is essential for a successful and credible EU approach to 5G security and to ensure the continued openness of the internal market, provided risk-based EU security requirements are respected.
The Commission will support the implementation of an EU approach on 5G cybersecurity and will act, as requested by Member States, using, where appropriate, all the tools at its disposal to ensure the security of the 5G infrastructure and supply chain:
- Telecoms and cybersecurity rules
- Coordination on standardization as well as EU-wide certification
- Foreign direct investment screening framework to protect the European 5G supply chain
- Trade defense instruments
- Competition rules
- Public procurement, ensuring that due consideration is given to security aspects
- EU funding programs, ensuring that beneficiaries comply with relevant security requirements.
The top 10 trends that will drive the most significant technological upheavals this year have been identified by Access Partnership.
“Shifts in tech policy will disrupt life for everyone. While some governments try to leverage the benefits of 5G, artificial intelligence, and IoT, others find reasons simply to confront Big Tech ranging from protectionism to climate urgency.
“Techlash trends highlighted in our report lay bare the risks of regulatory overreach: stymied innovation and economic growth for some and an unfair advantage for others,” said Greg Francis, Managing Director at Access Partnership.
Report highlights: Top policy trends for 2020
- AI regulation taking shape in the EU and the U.S.
- EU-based Digital Services Act (DSA) as the newest power grab since the GDPR
- New wave of tech protectionism in Europe
- China as a supply chain liability; other Asian nations filling in
- Spectrum sharing likely to become more mainstream with 5G
- 5G security to take an important position with shift to control functions
- U.S. privacy laws taking bipartisan note from California’s CCPA
- Data sharing regs to heat up, as balance with innovation becomes more critical
- IoTs, SIMs and eSIMs: who’s responsible for setting regulation?
- Rise of ‘green’ technology policy: another balancing act with industry emissions vs. the industry’s potential ability to solve climate change
Francis continued: “In just one year, we’ve seen dramatic changes in the regulatory and policy landscape for technology companies, originating in Europe but deeply affecting U.S. and other major global players.
“The report notes that while divisive impeachment proceedings in America create a blockage in new legislation pipelines, there is surprising bipartisan agreement on tech policy — Republicans are moving to protect companies from growth-killing regulation, and Democrats are seeking to pre-empt state-level measures.
“We expect to see new regulatory models emerging in the U.S. and other nations in reaction to the EU’s push for digital sovereignty.”
The California Consumer Privacy Act became effective on the first day of 2020 and will affect millions of consumers and tens of thousands of companies.
The advent of the CCPA and other similar regulations marks a sea change in how companies need to manage data and consumer privacy. As with many other regulations, organizations may view CCPA as a compliance burden. Forward-thinking companies, though, may view the law as an opportunity to increase customer personalization.
The CCPA is considered to be the most comprehensive of any state privacy law. In 2018, the General Data Protection Regulation (GDPR), the biggest remake of data privacy rules affecting European citizens in more than 20 years, required similar actions.
Effectively, the CCPA gives regulatory power to the individual because consumers can opt out of having their data sold and have the right to be forgotten. To achieve compliance, companies need to adjust, and switch from a one-size-fits-all mindset on data management to a highly agile, personal approach for consumer data.
This means that companies need data policies that sit with and follow the data so that a consumer can opt to share one piece of data about themselves with company A, but not company B, and another piece of data about themselves for some purposes, but not others.
The data about the data
Being able to do that sounds like a daunting prospect, and it is: legacy technologies and ways of handling data can’t do it. But next generation database technologies allow companies of all sizes to get specific with data.
In effect, the CCPA and the GDPR require companies to have a 360-degree view of their data. Achieving that means breaking data out of silos and integrating it in a central hub where it can be governed according to consistent policies and accessed appropriately.
This governance and management of data depends on being able to also manage metadata. Metadata is the data about the data. When that metadata sits with the data – versus a separate and disconnected repository of data rules – companies get to the granular level that new regulations require.
For example, if data includes an email address, companies need metadata spelling out what consent has been given for its use. If consent is given for it to be used for billing but not for marketing, the data hub makes it available only for billing and not for marketing.
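The email-address example above can be sketched as a record whose consent metadata sits with the data itself. The field names here are illustrative assumptions, not any particular data hub's schema; the point is that the release decision reads the metadata attached to the value, not a separate rulebook.

```python
# Hypothetical record: the consent metadata travels with the data value.
record = {
    "value": "alice@example.com",
    "meta": {
        "field": "email",
        "consent": {"billing": True, "marketing": False},
    },
}

def fetch_for_purpose(rec, purpose):
    """The hub releases a field only for purposes the subject consented to."""
    if rec["meta"]["consent"].get(purpose, False):
        return rec["value"]
    raise PermissionError(f"no consent recorded for purpose: {purpose}")
```

With this layout, `fetch_for_purpose(record, "billing")` returns the address, while a request for `"marketing"` is refused, and because the policy lives in the metadata, updating a consent flag changes behavior everywhere without touching application code.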
It’s difficult to ensure trust and accountability in data when data is sourced from different silos and applied to many different use cases. However, when governance policies regarding such things as restricted access to personal information are embedded in a central data hub, they can be applied to any use case, ensuring that the data is always fit for purpose. This allows for more standardized, automated and auditable application of data governance policies, without having to educate everyone in your organization every time a policy changes.
Companies can pursue these changes incrementally, as well. Metadata can be attached to data even if the data stays in a silo. A bank, for instance, can consolidate a metadata index around customer data for a loan, and the policies regarding privacy will stay with the data in that case, too.
Metadata also enables companies to know where data came from, when it arrived, if, how and when it was changed, and who changed it. This provides the context necessary for an accurate view.
Lean into data regulations
Because use cases, regulations and policies change frequently, modern data management systems provide the flexibility to support changing regulations and business needs. But whether companies change incrementally or with a new data structure, they’ll sell themselves short if they only get into compliance with CCPA and GDPR.
Rather than viewing data as a regulatory compliance liability, leading enterprises look at data as an asset and regulations as compelling events to better leverage those assets. Indeed, when companies know their customers’ preferences so well that they can abide by detailed privacy preferences, they know their customers very well.
That knowledge will enable them to tailor offers, extend targeted services and provide suggestions to the consumer like never before. Also, that company will be well positioned to confidently share data to further enable personalization of the customer experience.
Companies that achieve a better customer experience – more personalized and more enduring relationships based on transparency and trust – will experience a big upside. Eight in ten consumers say they’re more likely to do business with a company if it offers personalized experiences, and nine in ten people find personalization appealing, indicates research from Epsilon.
Forward-leaning posture
In the old days, TV networks blasted everyone with the same commercials. Now, commercials are targeted toward preferences that consumers signal by where they shop, what they buy, when they go online, what they watch, what they listen to, and many more data points collected behind the scenes.
For the most part, companies have had a free ride collecting and using that consumer data. Now, that free ride is ending. The GDPR alone is estimated to impact 740 million consumers. According to the Internet Association, a trade association for Internet companies that is pushing for more consistent federal regulation, 29 US states have now passed laws related to data privacy, and there’s no doubt more will come.
Companies that shift from a defensive crouch regarding regulatory requirements and adopt a forward-leaning posture will create a platform that’s respectful of consumer data and wishes – and mindful of how consumers shop and the services they seek. This will be a winning formula for both consumer and company.
Work around data never seems to end. Between collection, sharing and use, the burden falls onto the shoulders of the CISO, and its breadth seems to increase year-on-year. The question that must be asked is: can we expect the CISO to prosper when data itself seems to be out of control?
Complex issues can be broken down into simpler parts to help resolve them. This may be true in the case of data security. The state of data can be reduced to the question of who owns it – in other words, data ownership equates to data control.
This approach then leads to more nuanced layers of consideration:
- When does data ownership change hands?
- How does data ownership impact security choices?
- What aspects of data governance and regulatory compliance are affected?
In turn, further layers of the data onion will peel away to reveal more questions, such as: who owns the responsibility for complying with data regulations? Where does the responsibility for data security actually lie? If a customer uploads an image to your site, who owns that image? And who is responsible for keeping it safe?
These questions open a moral dilemma around data security responsibility – and nuanced questions can lead to fuzzy answers. Any fuzziness in ownership can be used to offset responsibility. If a CISO is swamped with work, it is a natural next step to ‘pass the buck’. Understanding the finer aspects and nature of data can give us a more detailed analysis to work from.
Where does the data buck stop?
The data ownership vs. data processing dichotomy is a great place to understand where the data buck stops. It can help to use the GDPR principles around data. Article 4 of the GDPR provides the definitions of data processing and control to allocate responsibility; whilst the two are intrinsically linked and there may be some overlap, you can say:
Data controller: Article 5 of the GDPR sets out that the data controller must act in a manner of “lawfulness, fairness and transparency”. Data subject rights such as consent and access rights fall under the controller’s remit. Controllers should also protect the accuracy and confidentiality of personal data. In doing so, the controller will need to ensure the data processor is up to the job.
Data processor: This is an entity that processes the data on behalf of the controller. For example, if a user removes consent, the controller will handle this request, but the processor would be responsible for removing the data from their servers. It is important to note that a data processor carries significant security responsibilities; however, if cloud providers aren’t exposed to data, they won’t be labeled processors under GDPR.
The CISO may similarly have to set up their own “internal GDPR” equivalent to delegate ownership and help share data responsibility. The data onion, as you can see, has many layers.
How data ownership contracts can help
The CISO is not an island, and vendors are part of the data lifecycle. The data onion has touchpoints across technology, legal, and social domains. The legal argument can be headed off using data ownership contracts with your vendors. Cloud vendors, for example, may offer these types of contracts, which typically have clauses covering data privacy and security. These contracts handle various aspects of data protection, which should include:
1. Technological measures used to protect data.
2. Data breach notification procedures.
3. Compliance with any data protection regulations, such as GDPR and industry specific ones.
4. Third-party liabilities (this is the extended vendor ecosystem which adds yet another layer to the data lifecycle).
5. Data breach indemnity support.
The bottom line here is that there are many cogs in the ‘data lifecycle wheel’ and contracts can only go so far. We must also address the underlying issue of data ownership and movement to ensure all parts of this data lifecycle wheel are well lubricated.
The work of the CISO is never done but data governance can help
Cloud computing, the regulatory landscape, and changing customer expectations have changed data security choices and needs. The CISO has to fit all of these moving parts together and keep everyone happy.
Gartner has predicted that through 2025, 99% of cloud security failures will be the customer’s fault. They recommend that CIOs can combat this by implementing and enforcing policies on cloud ownership, responsibility, and risk acceptance. In addition, 60% of enterprises with proper Cloud Governance will see one-third fewer security incidents.
Layers of cloud complexity for the CISO
Whilst cloud computing is important, the way it is being handled creates further complexity. By using SaaS offerings from IT infrastructure giants like VMware and IBM, organizations effectively initiate vendor lock-in as they migrate to those providers’ cloud infrastructure and embrace the multi-cloud ethos.
The result is that the CISO must build ‘cloud-thinking’ into a model of security that embraces the cloud, SaaS, vendor ecosystems, compliance requirements, and customer needs. The data onion has many layers, but a comprehensive security approach that applies encryption to data at rest, in transit, and in use can take care of the vagaries of data ownership, preventing a ‘pass the buck’ culture.
We’re all actors in the data protection play
Protecting data that is at rest, in transit, and in use covers the spectrum of our ‘theatre of data’. A broad-brush approach to security covers the bases of the entire cast in the data protection play. This approach can give us the technological tools to protect data no matter who owns it, where it resides, or where it ends up. The ownership of data in a world where cloud and SaaS are ubiquitous is complicated with many stakeholders.
The CISO can turn this on its head by using encryption across the data lifecycle. No matter where the data goes, where it is stored, or how it is used, if encryption is part of the whole journey of the data, ownership becomes moot.
In the five years since the Right to Be Forgotten ruling, Google received some 3.2 million requests to delist URLs, from approximately 502,000 requesters, and decided to delist 45% of those URLs.
Balancing individual privacy and public interest
“Each delisting decision requires careful consideration of the balance between respecting user privacy and ensuring open access to information via Google Search,” noted Elie Bursztein, who leads Google’s security and anti-abuse research team.
It shouldn’t come as a surprise, then, that requests asking for the delisting of pages containing information on political activities and professional wrongdoing (this includes negative reviews) are fulfilled at a much lower rate than requests concerning personal and sensitive personal information.
Interestingly enough, almost a quarter of all requests are for the delisting of pages containing professional information, but those often go unfulfilled.
“The relatively low rate for requested professional information delistings (20.7%) is best explained by the fact that many of those requests pertain to information that is directly relevant or connected to the requester’s current profession, and is therefore in the public interest to be indexed by Google Search,” Bursztein explained.
Other interesting findings
Google’s analysis also revealed that:
- After an initial delisting request frenzy during the first year after the ruling, the number of delisting requests has stabilized at about 47,000 per month.
- Initially, it took Google 85 days (on average) to decide whether to honor a delisting request or not. In 2019, that number fell to 6 days.
- Most requests are submitted by private individuals (84%).
- The top 10,000 requesters are responsible for 34% of the URL delisting requests.
- The news, government, social media, and directory sites are most frequently targeted for delisting.
- Internet users in some European countries take more advantage of the Right to Be Forgotten than those in others. The reasons for this disparity? Differences in attitudes towards privacy, media norms, and (likely) knowledge of the RTBF process. Also, RTBF requests mostly target local content.
In the light of the General Data Protection Regulation (GDPR), the challenge of proper application of pseudonymisation to personal data is gradually becoming a highly debated topic in many different communities, ranging from research and academia to justice and law enforcement and to compliance management in several organizations across Europe.
Pseudonymisation and personal data challenges
The ENISA “Pseudonymisation techniques and best practices” report discusses, amongst other things, the parameters that may influence the choice of pseudonymisation technique in practice, such as data protection, utility, scalability and recovery.
It also builds on specific use cases for the pseudonymisation of certain types of identifiers (IP address, email addresses, complex data sets).
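As one concrete illustration of pseudonymising an identifier such as an email address, a keyed hash (HMAC) is among the commonly discussed techniques: it yields a stable pseudonym that preserves linkability across records while making recovery of the original identifier infeasible without the secret key. The sketch below is a minimal example under those assumptions, not a complete pseudonymisation scheme; the key handling in particular is simplified.

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Derive an HMAC-SHA256 pseudonym for an identifier.

    Deterministic under the same key (so pseudonymised records can still
    be linked), but not reversible without the key.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# In practice the key would be managed by the data controller, not hard-coded.
key = b"example-secret-key"
p1 = pseudonymise("alice@example.com", key)
p2 = pseudonymise("alice@example.com", key)

assert p1 == p2                                     # same input -> same pseudonym
assert pseudonymise("bob@example.com", key) != p1   # distinct inputs stay distinct
```

Note the trade-off the report highlights: a deterministic pseudonym keeps utility for linkage, but that same linkability can aid re-identification attacks, which is why technique choice depends on the scenario.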
There is no easy solution
One of the main outcomes of the report is that there is no single easy solution to pseudonymisation that works for all approaches in all possible scenarios.
On the contrary, it requires a high level of competence in order to apply a robust pseudonymisation process, possibly reducing the threat of discrimination or re-identification attacks, while maintaining the degree of utility necessary for the processing of pseudonymised data.
58% of surveyed businesses worldwide failed to address requests made from individuals seeking to obtain a copy of their personal data as required by GDPR within the one-month time limit set out in the regulation, reveals updated research from Talend.
GDPR compliance rate: 2018 and now
In September 2018, Talend released the results of its first GDPR research benchmark, which aimed to assess organizations’ ability to achieve right-to-access and portability compliance with the European regulation. At that time, 70% of the companies surveyed reported they had failed to provide an individual’s data within one month.
One year later, Talend surveyed a new population of companies, as well as the companies which reported a failure to comply in the first benchmark, in order to map improvement. Although the overall percentage of companies who reported compliance increased to 42%, the rate remains low 18 months after the regulation came into force.
“These new results show clearly that Data Subject Access Rights is still the Achilles’ heel of most organizations,” said Jean-Michel Franco, Senior Director of Data Governance Products at Talend. “To fully comply with GDPR it is necessary to understand where the data is, how it is processed and by whom, as well as ensure that the data is trusted.”
Organizations are struggling to meet requests
The research revealed that only 29% of the public sector organizations surveyed could provide the data within the one-month limit. With an increasing use of data and new technologies – facial recognition, artificial intelligence – by the public sector to improve the citizen experience, the need for more integrated data governance is a must-have for 2020 and beyond.
The same observation applies to companies in the media and telecommunications industries. Only 32% of these organizations reported that they could provide the correct data on time.
Many firms barely reach an average success rate
Compared to last year, retail companies improved their success rate with 46% of such companies reporting they provided correct responses within the one-month limit. A greater proportion of companies in this industry started to take a customer-centric approach to both improve the experience and internal processes.
The same situation occurs with organizations in finance as well as in the travel, transport, and hospitality industries. That said, the latter are considered the best performers, as companies in that industry represent 38% of all the organizations that provided data in less than 16 days.
The lack of automation remains a barrier to success
One take-away from this new benchmark is the lack of automation in processing requests. One of the main reasons companies failed to comply was the lack of a consolidated view of data and of clear internal ownership over pieces of data. In the financial services industry, for example, clients may have multiple contracts with a company that are not located in one place, making it difficult to retrieve all the necessary information.
Processing the requests thus remains very manual and often involves business users, e.g. the insurance representatives in the case of an insurance company. In addition, processing Subject Right Requests can be very costly; according to a recent Gartner survey, companies “spend, on average, more than $1,400 to answer a single SRR.”
ID proof and requesting process should be improved
The research also highlights the lack of an ID check in the data request process. Overall, only 20% of the organizations surveyed asked for proof of identification. Moreover, of the companies that reported asking for proof of identification, very few used an online and secure way of sharing ID documents; most of the time, copies of identification were provided by email. The requesting process also remains cumbersome, with reported difficulties including finding the right email address to send the request to, and follow-up emails because the data is incomplete or the files can’t be opened.