50 roles shifted off to India
DXC Technology is sending hundreds of security personnel from its Americas division down the redundancy chute and offshoring some of those roles to low-cost centres, insiders tell us.
As revealed by The Register at the back end of March, the outsourcing badass cum cloud-wannabe confirmed the security practice within the Offering division needs to purge $60m in expenses in fiscal 2020, which began on 1 April.
A chunk of that is to be generated by redundancies, with some 300 people – 45 per cent of the US security team – being laid off. We are also told that 50 roles are being moved to India, but it is not clear if other roles will move to centres in the Philippines, Vietnam and Eastern Europe.
Teams across DXC Security in Data Protection and Privacy, Security Incident Event Management, Technical Vulnerability Management, and Security Risk Management are all impacted too. The process started in May and is to be wrapped up by next month.
DXC Security exec: Yes, I’d have thought we’d spend more on certs and laptop kit for staff, too
The entire US Managed Proxy team – save for one engineer, who was already let go last month to hit financial targets – is to be made redundant on 28 June. But rather than a straight workforce redundancy, this is classified as a workforce migration, we are told.
An impacted DXCer told us the Managed Proxy team were last month given five-and-a-half weeks’ advance notice to help the accounts they manage migrate the design, implementation and support work to a DXC team in India under the control of Biswajeet Rout, who already runs the legacy CSC network, proxy and security team in the country.
One staffer claimed teams are being shunted to India and some are “having to train their replacements who do not have the experience of the staff [being made redundant]”.
We were also told that contractors will be used to cover gaps where full-time employees have left the organisation.
El Reg has been told that Mark Hughes, who previously ran BT’s internal tech security and its go-to-market security sales before rocking up at DXC in December, is trying to address changes in the security market involving cloud, AI and automation while also juggling DXC’s desire to reduce the division’s costs by $60m.
Sources told us DXC will try to update skills, concentrate certain ones in global delivery centres to be created in the US and Europe, and house some lower-margin, or commoditised, security work in lower-cost areas.
Platform DXC will play a major role in automating the division's service delivery; patching, for example, is one of the areas to be addressed in this way.
Other cost savings are expected to come from things like vendor consolidation: fewer vendors means fewer certifications to maintain across the various teams, a costly and time-consuming exercise. A team has been assembled to decide which vendors the firm will stick with.
DXC: Slashing costs affects ability to attract, develop and retain staff? Who’d have thunk it!
In related news, sources have also told us that Dean Clemons, global SC&C services leader at DXC, has quit. Quint Ketting has replaced him on an interim basis until a permanent successor is found.
Clemons has warned his troops of “structural changes” – some middle managers have already gone. As he’d said in a March conference call – which El Reg heard a recording of – DXC is moving to a set-up based on industry verticals rather than being practice-specific.
A DXC spokesman told us:
“The security landscape is changing, and our global clients need different types of services as they progress through their digital transformation. At the same time, security skills are becoming both more specialized and more scarce. We therefore need to look worldwide to fulfill these changing requirements.” ®
We know, there’s lots of privacy news, guidance and documentation to keep up with every day. And we also know, you’re busy doing all the things required of the modern privacy professional. Sure, we distill the latest news and relevant content down in the Daily Dashboard and our weekly regional digests, but sometimes that’s even too much. To help, we offer our top-five most-read stories of the week.
- Privacy Tech: “Deidentification versus anonymity,” by Humu Chief Privacy Officer Lea Kissner
- IAPP Web Conference: “Privacy Engineering Live – Is Encrypted Data Personal Under the GDPR?”
- Privacy Perspectives: “A data processing addendum for the CCPA?” by Michael Hahn, senior vice president and general counsel at IAB, IAB Tech Lab and Trustworthy Accountability Group, and Lowenstein Sandler’s Sundeep Kapur, CIPP/US, and Matt Savare
- Privacy Tracker: “Comparing Maine and Nevada’s new privacy laws with the CCPA,” by Baker McKenzie’s Lothar Determann and Helena Engfeldt, CIPP/E, CIPP/US
- Privacy Tech: “FPF, Israel Tech Policy Institute to launch Privacy Tech Alliance,” by Ryan Chiavetta, CIPP/US
The OpenSSH project has received a patch that prevents private keys from being stolen through hardware vulnerabilities that allow hackers to access restricted memory regions from unprivileged processes. The same approach could be used by other software applications to protect their secrets in RAM until the issues are fixed in future generations of SDRAM chips and CPUs.
The patch comes after a team of researchers recently presented an attack dubbed RAMBleed that exploits the design of modern memory modules to extract information from memory regions allocated to privileged processes and the kernel.
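The idea behind the patch is to keep the private key encrypted in memory whenever it is not in use, with the symmetric key derived from a large (16KB) "prekey" of random data. Because memory-disclosure attacks like RAMBleed recover bits with some error rate, an attacker would need to leak the entire prekey without a single bit flip before the shielded key is of any use. The sketch below is a conceptual illustration only, not OpenSSH's actual code: the function names are ours, and the toy SHA-512 counter-mode keystream stands in for the real cipher OpenSSH uses.

```python
import hashlib
import os

PREKEY_LEN = 16 * 1024  # OpenSSH shields keys behind a 16 KB random "prekey"

def _keystream(key: bytes, length: int) -> bytes:
    # Toy stream cipher: hash key || counter repeatedly until we have
    # enough bytes, then truncate. Illustrative, not production crypto.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha512(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def shield(secret: bytes):
    """Encrypt a secret under a key derived from the whole prekey.

    An attacker leaking RAM with any nonzero bit-error rate must
    recover all 16 KB of prekey perfectly to derive the symmetric key."""
    prekey = os.urandom(PREKEY_LEN)
    enc_key = hashlib.sha512(prekey).digest()
    stream = _keystream(enc_key, len(secret))
    return prekey, bytes(a ^ b for a, b in zip(secret, stream))

def unshield(prekey: bytes, ciphertext: bytes) -> bytes:
    # Re-derive the symmetric key from the intact prekey and decrypt.
    enc_key = hashlib.sha512(prekey).digest()
    stream = _keystream(enc_key, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

private_key = b"-----BEGIN OPENSSH PRIVATE KEY-----..."
prekey, shielded = shield(private_key)
assert unshield(prekey, shielded) == private_key
```

In the real patch the key is unshielded only for the brief moments it is needed for signing, then immediately re-shielded with a fresh prekey.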
Looking at Canada’s Personal Information Protection and Electronic Documents Act, the word “reasonable” pops up quite often. Companies have an obligation to ensure they always act in a way a reasonable person would consider appropriate in a given circumstance.
Almaga Consulting President Gilles Fourchet, CIPP/C, CIPT, FIP, said one area where reasonableness should play a role for privacy professionals is conducting privacy impact assessments.
He compared performing a PIA to how encryption was viewed years ago. While encryption may not have been necessary two decades ago, it is now reasonable – and expected – for it to be a part of an organization’s practices. PIAs should be treated the same way.
“If you didn’t do a privacy impact assessment 10 or 15 years ago, you got a slap on the wrist,” Fourchet said during a session at the IAPP Canada Privacy Symposium in Toronto recently. “Nowadays, you get much more than a slap on the wrist.”
Organizations would be wise to have a PIA ready to go whenever it may be needed, he added; however, the contents of the PIA are likely going to differ from entity to entity.
If “reasonable” is one word Fourchet would attach to PIAs, the other would be “subjective.” He said that in risk management, what is seen as a vulnerability or a threat by one person may be entirely different for another, based on industry or the types of data an entity holds.
A financial institution, for example, has to consider what would happen in the event it suffered a data breach. In its PIA, it would need to assess what would occur if financial information were to be leaked, as well as what the incident’s impact would be on its institutional reputation.
Fourchet said organizations may take a quantitative approach to their determination of risk scores, but they should be cautious. Regulators will want to see the rationale behind their risk scores, which ultimately will tie back to reasonableness, he said.
“You have to justify it. You have to justify ‘I think the risk is medium.’ At the end of the day, you might have more questions than answers, but unfortunately, there is no mathematical method for you to enter numbers to get risk volatility … It’s not math. It’s not science. It’s an assessment,” he said.
Fourchet recommends organizations be proactive with PIAs. It’s easier to have a PIA baked into a program than attempt to fit one into a preexisting process. He added it’s important for privacy professionals crafting the PIA to meet with business owners and stakeholders as they go through establishing the document.
“Make sure the business folks keep you guys in mind from the very moment they start to have an idea,” Fourchet said. “You have to be seen as people that are adding value. A lot of times the PIA arrives at the very end of the business process. If they include you at the very beginning it would be seamless and perfect.”
By talking with all the principal parties within an organization, privacy professionals can get a better understanding of business processes and data flows. Those conversations can help avoid legal issues.
It is important for privacy professionals to include business owners and senior management as part of the PIA process for accountability. Fourchet said privacy professionals are messengers, and it’s up to business owners to ensure a PIA is carried out. Accountability cannot be transferred; if an investigation were to take place, it is ultimately the organization’s problem.
“You should act as a consultant. You provide expert opinion when writing PIAs. It is not your job to do it. It is not for you to act upon your recommendations,” Fourchet said. “You can advocate for your recommendations, but that is it. You are responsible, but not accountable, for your PIA. At the end of the day, privacy is not the business of your organization. Privacy should be seen as an added value and an advantage.”
Greetings from Dublin, where summer has arrived at last … though that may change in a nanosecond!
It’s a busy time for privacy pros in Ireland’s health care sector at the moment. They are working hard not just to comply with the EU General Data Protection Regulation, but also to roll out a new framework for processing data for health research.
The Irish government’s Health Research Regulations 2018 apply in addition to the GDPR and the Data Protection Act 2018, so organizations are now creating processes to ensure that they comply with all regimes, in addition to any other regulations relating to health research and clinical trials (including the Clinical Trials Regulation – EU Regulation 536/2014 – due to commence in 2020).
In summary, the Health Research Regulations set out “suitable and specific measures” to be implemented when processing personal data for health research. These measures include a requirement that personal data is not processed in such a way that causes damage or distress to data subjects. Governance structures must be in place, including processes for ethical approval; compliance with the GDPR; and specification of the controller, funders and those with whom the personal data will be shared (even where the data is anonymized or pseudonymized). There is also a requirement to provide data protection training to researchers.
Furthermore, specific processes must be in place for the management and conduct of health research, including DPIAs, data minimization, access controls and security measures, and compliance with the GDPR. An important issue is the requirement to obtain the explicit consent of data subjects. While consent is just one of the lawful bases under which health data can be processed under Articles 6 and 9 of the GDPR, under the Health Research Regulations in Ireland, organizations must obtain the explicit consent of data subjects to process their personal data for the purposes of health research, even where another lawful basis under Articles 6 and 9 of the GDPR exists.
There is an exemption to the requirement to obtain explicit consent under the regulations where organizations apply for a “consent declaration.” This involves the government’s Consent Declaration Committee assessing the proposed research and finding that explicit consent is not required because the public interest in carrying out the research outweighs the public interest in requiring explicit consent. Utilizing this exemption may prove to be an arduous task, however, due to the extent of the information to be provided with the application and the conditions to be fulfilled in advance of the application. There is also a transition period for research that commenced before 8 Aug. 2018: organizations must obtain the explicit consent of the data subjects before 7 Aug. 2019 or seek a consent declaration.
The regulations go over and above the GDPR and may result in delays to research projects, certainly at the beginning stages while organizations implement the necessary processes and await consent declarations from the Consent Declaration Committee. At this point, most organizations engaged in health research in Ireland should have assessed their ongoing health research projects and determined whether appropriate levels of consent have been obtained or whether they must make an application for an exemption before the August deadline. DPOs must make sure that they are included in the process also. There’s never a dull moment for privacy pros!
He then doubled down on spies’ ‘ghost user’ backdoor plan
Solving the Huawei 5G security problem is a question of convincing the Chinese to embrace British “fair play”, security minister Ben Wallace said yesterday without the slightest hint of irony.
During a Q&A at Chatham House’s Cyber 2019 conference, Wallace said the issue of allowing companies from non-democratic countries access to critical national infrastructure was about getting them to abide by, er, Western norms.
The former Scots Guards officer explained: “I take the view: we’re British, we believe in fair play. If you want access to our networks, infrastructure, economy, you should work within the norms of international law, you should play fair and not take advantage of that market.”
Someone speaking later in the conference, who cannot be named thanks to the famous Chatham House Rule*, later commented: “If we don’t trust them in the core, why should we trust them in the edge?”
Nonetheless, Wallace later expressed regret at Chinese dominance of the 5G technology world, saying: “The big question for us in the West is actually, how did we get so dependent on one or another? Who is going to be driving 6G? How are we, in our society, going to shape the next technology to ensure our principles are embedded in that tech? That’s a question we should ask ourselves: were we asleep at the wheel for the development of 5G in the first place?”
The security minister also doubled down on GCHQ’s controversial and deeply resented proposal to backdoor all encrypted communications by adding themselves as a silent third participant to chats and calls – thus rendering encryption all but useless.
“Under the British government,” he said, “there is an ambition that there is no no-go area for properly warranted access when required. We would like, obviously, where necessary, to have access to the content of communications if that is properly warranted, oversighted, approved by Parliament through the legislation, of course we would. We’re not going to give up on that ambition… there are methods we can use but it just changes our focus. As long as we do it within the law, well warranted and oversighted.”
This contrasts sharply with previous statements by GCHQ offshoot, the National Cyber Security Centre (NCSC), that the government needs a measure of public support before it starts harming vital online protections. At present, Britain’s notoriously lax surveillance laws allow police to hoover up the contents of your online chats and your web browsing history, including precise URLs. This is subject to an ongoing legal challenge led by the Liberty human rights pressure group.
As the minister of state for security and economic crime, Wallace’s wide-ranging brief covers all national security matters, from terrorism to surveillance powers to seeing hackers locked up.
In his keynote address to the conference, Wallace also declared he wants the British public “protected online as well as they are offline” as he gave the audience of high-level government and private sector executives a whistle-stop tour of current UK.gov policy and spending on cybersecurity. One part of that is a push to get better security baked into Internet of Things devices, part of which is the NCSC-sponsored Secure by Design quasi-standard.
The government has also begun prodding police forces to start setting up cyber crime units, with Wallace confirming that “each of the 43 forces [in England and Wales] now have a dedicated cyber crime unit in place”. ®
* The Chatham House Rule states that what is said at a particular meeting or event may be repeated but not attributed.
Speaking at the Gartner Security and Risk Management Summit, former Secretary of the U.S. Department of Homeland Security Michael Chertoff said that data regulation in the U.S. may mirror the EU General Data Protection Regulation by giving users more control of their data. “The focus has to change from ‘hide the data,’ which [isn’t] going to work, to ‘controlling the data,'” Chertoff said of the overall scope for any proposed regulation. He added that tech companies “are starting to acknowledge there should be some regulation about how data is used.” Chertoff also called for a closer look at tech companies bargaining free online services in exchange for user data while proposing that companies should seek to add layers to their cybersecurity systems.
The digital advertising industry is undergoing a rapid regulatory transformation. The EU General Data Protection Regulation went into effect more than a year ago, and the California Consumer Privacy Act is right around the corner. Other jurisdictions are likely to follow. Industry lawyers created legal frameworks to comply with the GDPR but now need to determine what changes are needed to comply with the CCPA and, potentially, future privacy laws in other states. One important part of that assessment is the data processing addendum. In this post for Privacy Perspectives, the Interactive Advertising Bureau’s Michael Hahn, along with Lowenstein Sandler’s Sundeep Kapur, CIPP/US, and Matt Savare, explore whether companies need to amend their existing data processing addenda to comply with the CCPA and if there is a long-term solution to avoid having to draft new addenda every time a jurisdiction adopts a new privacy law.
The digital advertising industry is undergoing a rapid regulatory transformation. The EU General Data Protection Regulation went into effect more than a year ago, and the California Consumer Privacy Act is right around the corner with a Jan. 1, 2020, effective date. Other jurisdictions are likely to follow. Industry lawyers created legal frameworks to comply with the GDPR but now need to determine what changes are needed to comply with the CCPA and, potentially, future privacy laws in other states.
One important part of that assessment is the data processing addendum.
Emergence of the data processing addendum
Just two years ago, the concept of a data processing addendum did not even exist for many companies. Now, the data processing addendum is ingrained in the lexicon of every privacy practitioner throughout the world and forms the contractual foundation upon which data is processed by one party on behalf of another. While companies continue to enter into these addenda for GDPR purposes, there is a growing buzz among industry lawyers about whether a data processing addendum is required or advisable in order to comply with the CCPA. Common questions being posed include:
- Do companies need to amend their existing data processing addenda in order to comply with the CCPA?
- Is there a long-term solution to avoid having to draft new data processing addenda every time a jurisdiction adopts a new privacy law?
GDPR data processing addenda
Article 28 of the GDPR generally requires a written contract between a “controller” and “processor” to govern the processing of personal data (in certain limited instances, a processor may be able to satisfy Article 28 through another “legal act” that is binding upon such processor, but this is not relevant for our discussion here).
At a high level, the “controller” determines what personal data will be processed and for what purposes (i.e., “the purposes and means of processing”), and the “processor” carries out such processing based on the controller’s instructions. Their contract must contain certain provisions enumerated within Article 28, including that the processor will (1) comply with the GDPR; and (2) assist with the controller’s GDPR compliance. These provisions include promises to process personal data only on the documented instructions of the controller, provide “adequate security,” assist with data subject rights, and give appropriate breach notification, among others. Further, the contract must require the processor to flow down all such obligations to its subprocessors in similar data processing addenda.
These GDPR requirements started a flurry of activity, whereby controllers and processors entered into data processing addenda as riders to their master services agreement (or similar agreement) in order to comply with Article 28.
A CCPA data processing addendum and beyond?
Under the CCPA, compliance obligations attach to three different types of entities: (1) a “business;” (2) a “service provider;” and (3) a “third party.” Each is a defined term under the CCPA.
Taking a cue from the GDPR, the CCPA defines a “business” as a for-profit entity that determines the “purposes and means of the processing of … personal information.”
A “service provider” is a for-profit entity that processes this personal information “on behalf of a business and to which the business discloses a consumer’s personal information for a business purpose pursuant to a written contract ….” This written contract must prohibit the service provider from “retaining, using, or disclosing the personal information for any purpose other than for the specific purpose of performing the services specified in the contract for the business.”
Finally, a “third party” is defined in the negative. It is any entity that is not (1) a business that “collects” personal information from a consumer; or (2) a service provider with the contractual restrictions described above and in this paragraph (or any other “person” with the same such contractual restrictions). Interestingly, the “third party” definition adds prohibitions that must be included within the written contract between the business and the service provider or another person in order to not be considered a third party (although it is unclear why).
Specifically, such written contracts must also prohibit the service provider or another person from (i) “selling” the personal information (a “sale” is a defined term under the CCPA); and (ii) retaining, using or disclosing the personal information outside of the direct business relationship with the business or for any other purpose than what is specified in the contract. It must also contain a “certification” by the service provider or another person that it understands all its contractual restrictions and will comply with them.
To sum this up: A business determines the purposes and means of processing; a service provider processes personal information on behalf of a business; and the business and service provider must have a written contract containing certain provisions. Sounds quite similar to the GDPR! However, there are two key differences:
- It is unclear whether an entity must be provided personal information from a business in order to be considered a “service provider.” Under the GDPR, processors oftentimes collect personal information directly from customers pursuant to a controller’s orders. Under a literal reading of the CCPA, a service provider must receive personal information from a business in order to be considered as such. If that is the case – which we hope the California attorney general will clarify – then it will be more difficult for certain entities, such as analytics providers that collect information directly from a website visitor, to be considered service providers. Instead, they may be considered businesses or third parties.
- The CCPA’s written contract requires much less than Article 28 of the GDPR. While Article 28 of the GDPR has a list of provisions that must be included in the contract, the CCPA only prohibits a service provider from using personal information for any purpose outside of the services rendered to the business. There is no requirement, for example, for service providers to flow down their prohibitions to other service providers; however, practically speaking, this may be necessary so that service providers can better comply with their contracts with businesses (i.e., only disclose personal information to provide the services).
In light of the current regulatory uncertainty and the specific requirements contained in the CCPA, relying on vague “compliance with applicable laws” representations is insufficient and imprudent.
Such amendments to existing or new data processing addenda should state which entity is the “service provider” under the CCPA and that such service provider:
- Receives the personal information from the business pursuant to a “business purpose,” although it is unclear whether you need to state the specific business purpose.
- Will not “sell” the personal information (as the term “sell” is defined under the CCPA).
- Will retain, use or disclose such personal information only for the specific purpose of performing the services and within the direct business relationship with the business.
- “Certifies” that it understands its contractual restrictions and shall comply with them.
In addition, the document should address such contentious issues as indemnification, limitation of liability, and what happens in the event of a change in law.
In order to assist companies within the digital advertising ecosystem to comply with the GDPR and the CCPA (and similar state laws in the future), the Interactive Advertising Bureau and the American Association of Advertising Agencies are teaming up with stakeholders from across the ecosystem to draft a model data processing addendum to which parties can voluntarily choose to adhere. This working group will stay active so that the model can evolve as additional jurisdictions adopt new privacy laws or change existing laws. The working group plans to release the model data processing addendum in the third or fourth quarter of this year.
The Wall Street Journal reports on the financial benefits tech companies have seen in the first year of the EU General Data Protection Regulation. Marketers have spent more advertising dollars with larger tech companies in the GDPR era, as advertisers believe the bigger entities are more likely to remain compliant with the European rules. Established tech platforms also have a direct relationship with consumers, which makes it easier to obtain user consent from a large group of patrons. “[The] GDPR has tended to hand power to the big platforms because they have the ability to collect and process the data,” said WPP CEO Mark Read, who added the rules have “entrenched the interests of the incumbent, and made it harder for smaller ad-tech companies, who ironically tend to be European.” (Registration may be required to access this story.)
NASA’s JPL may be able to reprogram a probe at the arse end of the solar system, but its security practices are a bit crap
Office of the Inspector General brings lab back down to Earth
NASA’s Jet Propulsion Lab still has “multiple IT security control weaknesses” that expose “systems and data to exploitation by cyber criminals”, despite cautions earlier this year.
Following up on a strongly worded letter sent in March warning that NASA as a whole was suffering cybersecurity problems, the NASA Office of the Inspector General (OIG) has now released a detailed report (PDF).
Its findings aren’t great. The JPL’s internal inventory database is “incomplete and inaccurate”, reducing its ability to “monitor, report and respond to security incidents” thanks to “reduced visibility into devices connected to its networks”.
One sysadmin told inspectors he maintained his own parallel spreadsheet alongside the agency’s official IT Tech Security Database system “because the database’s updating function sometimes does not work”.
An April 2018 cyberattack exploited precisely this weakness when an unauthorised Raspberry Pi was targeted by an external attacker.
A key network gateway between the JPL and a shared IT environment used by partner agencies “had not been properly segmented to limit users only to those systems and applications for which they had approved access”. On top of that, even when JPL staff opened tickets with the security helpdesk, some were taking up to six months to be resolved – potentially leaving in place “outdated compensating security controls that expose the JPL network to exploitation by cyberattacks”.
No fewer than 666 tickets with the maximum severity score of 10 were open at the time of the visit, the report revealed. More than 5,000 in total were open.
Indeed, such a cyberattack struck the whole of NASA back in December. Sensitive personal details of staff who worked for the American space agency between 2006 and 2018 were exfiltrated from the programme’s servers – and it took NASA two months to tell the affected people.
Even worse, the JPL doesn’t have an active threat-hunting process, despite its obvious attractiveness to state-level adversaries, and its incident response drills “deviate from NASA and recommended industry practices”. The JPL itself appears to operate as a silo within NASA, with the OIG stating: “NASA officials [did not] have access to JPL’s incident management system.”
Perhaps this report will be the wakeup call that NASA in general, and the JPL in particular, needs to tighten up its act. ®
Privacy browser reckons personalised advertising = personal data processing
Lawyers for the privacy-focused Brave browser have written to the UK’s Information Commissioner’s Office (ICO) with what they claim is evidence that Google’s online ad-selling policies break the EU’s General Data Protection Regulation (GDPR) – namely Article 5(1)(f).
Brave kicked off this fight back in September last year. At the heart of their battle is a claim that “personalised advertising” by Google counts as personal data processing. Broadly, they say Mountain View’s adtech empire is too vast, sprawling and automated to be fully compliant with the law.
Article 5(1)(f) of the GDPR states that personal data must be “processed in a manner that ensures appropriate security… including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures”.
In yesterday’s letter, Brave’s lawyers urged the ICO to join its fellow data cops in Ireland with their investigation into Google. They also want the ICO to widen its own enquiries to include 2,000 Google Authorised Buyers, whom it named in a spreadsheet forwarded to the data protection bods, along with strongly worded pleas to start an investigation.
Interestingly, Brave’s Johnny Ryan highlighted a report produced by US adtech critics DCN, which he said proved that online news outlets (i.e. the people who are most voluble about the damage done to their industries by Google and Facebook’s online ad duopoly) would benefit from an EU ban on personal data being used for ad targeting.
In response to all this, an ICO spokesperson told us: “The data protection implications of adtech are of interest to the ICO. We are currently concentrating on the ecosystem of programmatic advertising and real-time bidding (RTB). This aligns with our Technology Strategy, where both online tracking and artificial intelligence are highlighted as priority areas.
“We have been engaging with representatives of the adtech industry and recently hosted an event to discuss the data protection implications of current and future industry practices.” ®
Tick, tick, boom?
Column Last year I bought one of those nifty new fitness tracker wristwatches. It counts my steps and gives me a bit of a thrilling buzz when I’ve reached my daily goal. A small thing, but it means a lot.
This means I’m always under surveillance – in the best possible sense: my fitness tracker has its eye on me, continuously monitoring my motion, inertia, acceleration and velocity. It computes the necessary maths to turn those into steps and (kilo)calories. It keeps an extensive database of my activities, moment to moment.
Put like that, it sounds a bit suspicious. After all, why would anyone or anything need to keep such a close eye on anyone? But if I want to keep myself moving – and motivated – it makes sense to open up my private world, strap a sensor on, and let it listen.
This is a delicate point because our sensors don’t always let us know when they’re listening – something that has come back to bite Amazon among others. But the bigger question, inevitably, comes down to what happens with that data once it’s gathered. Where does it go? How does it get used, and for the benefit of whom?
My fitness tracker is just smart enough to create a data trail, but not quite smart enough to go rogue with the data it gathers. It downloads to an app, and from there I can control its distribution to the world – or so I choose to believe.
But there are far too many other points in this world where data is gathered, invisibly and unacknowledged. That data – even though we generate it – does not belong to us.
I wonder how I’d feel if my fitness tracker fed all my stats to someone else – someone I wouldn’t ever know – and never told me anything. I’d probably wonder why I bothered to wear it, but I’d also worry about how that data might be used. Against me.
Suppose my fitness tracker issued a soft buzz every time I passed a cafe, and told me I’d earned a nice cake. Within a month I’d gain twenty kilos, led down the garden path by a device that had gathered enough intimate details about me to know just the right way to nudge me away from my better interests.
As organisations gather huge stockpiles of data, they seem to grow increasingly tightfisted with their data and insights. They’ve found a gold mine – why share? The problem with this line of reasoning is that it quickly dead-ends in a world where the only conceivable use of data is as zero-sum competitive advantage: “I know something you don’t.”
If a quarter-century of the web has taught us anything, it’s that “a resource shared is a resource squared”. Your data may be nice, my data may be better – but it’s only when we work together that we can make something truly worthwhile.
The standout organisations of the mid-21st century will build value chains for data – paralleling the material value chains that drove the last century. This new age of “data welfare” sees data resources married, multiplied, shared and amplified.
I’m looking forward to a day when my fitness tracker talks to both my GP and my grocer [how about your health insurer? – Ed] so I can keep my health and my diet aligned with my activities. In a world where we’re all in this together, we’ll be building bridges with data – not walls. Let the dog-eat-dogs of data warfare sleep. ®
Privacy engineering is the evolving practice of incorporating data privacy into product and service development and building data governance controls into systems that process personal information. In response to this emerging trend, the IAPP has created a new web conference series to highlight workable solutions and innovative people and ideas. Join the IAPP July 9 for the second installment of Privacy Engineering Live in which Fieldfisher Privacy and Security Information Partner Phil Lee, CIPP/E, CIPM, FIP, Enterprivacy Consulting Group Principal Consultant R. Jason Cronk, CIPP/US, CIPM, CIPT, FIP, and VeraSafe Privacy Counsel Josh Gresham, CIPP/E, discuss a question privacy professionals agree to disagree on: Is encrypted data considered personal under the EU General Data Protection Regulation? Submit any questions and comments ahead of the program to [email protected].
In this week’s Privacy Tracker global legislative roundup, take a look at a new IAPP white paper that helps privacy professionals as they work to operationalize their California Consumer Privacy Act compliance programs. A piece for The Privacy Advisor looks at the potential future of the ePrivacy Regulation as Finland is set to start its six-month EU presidency. The Office of the Privacy Commissioner of Canada announced it has revised the framework for its transborder data flow consultation, and the Spanish Data Protection Agency, the AEPD, fined soccer league La Liga 250,000 euros for alleged violations of the EU General Data Protection Regulation.
The European Union Aviation Safety Agency published a new set of rules on the use of drones.
Open Rights Group found the U.K.’s Age-Verification Certificate Standard for pornographic content does not properly meet the necessary standards for cybersecurity and data protection.
The New York Times released a profile on Timothy Carpenter, the man at the center of the landmark Carpenter v. United States privacy case.
The 9th U.S. Circuit Court of Appeals in San Francisco reinstated a class-action lawsuit against Facebook for its alleged use of an auto-dialing system to send robocalls.
The Denton Record-Chronicle reports on the opposition to a proposed privacy bill in Texas.
The Washington Post reports on the struggles to enforce the Children’s Online Privacy Protection Act and the impact it has had on the online ecosystem.
In this piece for Privacy Tracker, Mattos Filho Privacy, Data Protection and Technology Associate Alan Thomaz, CIPP/E, CIPM, FIP, and Data Protection and Tech Regulation Law Professor and Lawyer Thiago Luís Sombra, CIPP/E, write about the provision approved by Brazil’s Congress that allows for the creation of a data protection authority for the country and amends the LGPD.
In this piece for The Privacy Advisor, David Thomas reports on the potential future for the ePrivacy Regulation as Finland is set to start its six-month EU presidency.
At the U.S. Senate Committee on Banking, Housing and Urban Affairs hearing, lawmakers sought answers on how the data broker industry operates, how it has evolved alongside technological progress, and what Congress should do about it. IAPP Editor Angelique Carson, CIPP/US, was at the hearing and has the details in this piece for The Privacy Advisor.
In this IAPP white paper, Baker McKenzie Partner Lothar Determann and IAPP Westin Fellow Mitchell Noordyke, CIPP/E, CIPP/US, CIPM, outline how businesses must develop a perspective on the definition of “account” as they work to operationalize their California Consumer Privacy Act compliance programs with respect to data access requests.
Steve Wright of Privacy Culture has published a white paper, housed in the IAPP Resource Center, that offers an overview of the GDPR Maturity Framework that aids a DPO, as well as the organization they represent.
The Office of the Privacy Commissioner of Canada has announced a revision to the framework of its consultation on transfers for processing and transborder data flows.
An August 2018 federal directive states Canadian surveillance agencies can collect and share citizens’ data as part of legitimate investigations.
The Austrian Supreme Court has ruled Facebook is not permitted to take further action to block a model suit regarding its fundamental privacy issues.
Spain’s Data Protection Agency, the AEPD, has fined soccer league La Liga 250,000 euros for alleged violations of the EU General Data Protection Regulation.
The European Data Protection Board published updated versions of its “Guidelines 4/2018 on the accreditation of certification bodies under Article 43 of the General Data Protection Regulation” and “Guidelines 1/2018 on certification and identifying certification criteria in accordance with Articles 42 and 43 of the Regulation.”
California Senate Democrats voted to ban the use of facial-recognition technology in body cameras used by law enforcement.
The issue of whether police can compel someone to provide their passcode or biometric data to unlock their phone has been raised in Florida.
Gov. Larry Hogan, R-Md., signed a bill to amend Maryland’s data breach notification rules.
Foley Hoag Security, Privacy and the Law Blog takes a look at key provisions within the New York Privacy Act.
A bill banning the use of facial-recognition technology in New York schools for a year has advanced to the state Assembly’s Standing Committee on Ways and Means.
The Vermont Supreme Court has ruled that a patient can pursue legal action against a hospital and one of its employees for privacy violations.
A lawsuit seeking class-action status in Washington state is alleging that the nonconsensual collection of children’s voice recordings by Amazon’s Alexa violates laws in at least eight states.
The Financial Times reports work on a federal U.S. privacy law has slowed down on Capitol Hill as lawmakers continue to have partisan disagreements over the potential rules.
Bloomberg Businessweek reports efforts to create U.S. privacy legislation, at a state and federal level, are not progressing as quickly as lawmakers had hoped.
A coalition of state attorneys general has asked the U.S. Federal Trade Commission for stricter reporting requirements for tech acquisitions and new rules for data brokers.
Sen. Amy Klobuchar, D-Minn., introduced the Protecting Personal Health Data Act, which would create protections for genetic, biometric and personal health data.
A coalition of public interest groups alleges the four major wireless providers in the U.S. violated privacy laws by sharing consumers’ location data without consent.
Happy Flag Day from Portsmouth, New Hampshire!
We’ve spent a lot of time in recent months discussing state and federal privacy law in this country … and with good reason! There’s a lot going on and a ton to keep up with, but I want to focus today on a few developments this week involving transborder data flows. As a side note, it’s hard to believe it was less than three years ago that we were collectively focused on the fate of the EU-U.S. Privacy Shield agreement. Post-Obama presidency, the GDPR and CCPA, it seems like a distant memory.
Earlier today, the Federal Trade Commission announced it is taking action against several companies that falsely claimed compliance with Shield and other international privacy agreements. Specifically, it reached a settlement with a background-screening company over false claims and sent warning letters to 13 other companies for allegedly making similar false claims.
Shield, along with standard contractual clauses, will also go on trial in the EU this July. Just in time for our Fourth of July celebrations here in the States, the General Court of the European Union (an arm of the Court of Justice of the EU) will hear a case against Shield. Three French-based organizations brought the case, which runs on similar logic to the so-called “Schrems I” case that eventually brought down the EU-U.S. Safe Harbor framework. The new complaint alleges the U.S. government is still permitted to conduct mass surveillance on EU citizens, though, according to a Hogan Lovells blog post, the organizations agree that the Shield is “a less vague scheme, and, implicitly, that it is more protective than Safe Harbor was …” The other case involves SCCs. The now-famous “Schrems II” case, which was referred to the CJEU by the Irish High Court, includes 11 questions about SCCs and whether they offer an adequate level of protection for data transferred to the U.S. from the EU. Huge consequences here in both cases.
On more positive transborder data flow news, the IAPP’s Joe Duball recently caught up with a representative from the U.S. Department of Commerce’s International Trade Administration to discuss the naming of a new accountability agent in the Asia-Pacific Economic Cooperation’s Cross-Border Privacy Rules program. As the second U.S.-based agent, Schellman now joins TrustArc subsidiary TRUSTe. Schellman Principal Debbie Zaller, CIPP/US, told Joe that “It fits right in with our other services.” She added that “it creates a lot of opportunity for all.” Along with TrustArc, including more accountability agents may help get “the word out in the world about APEC and what the certification can do for organizations.” This may well be a positive step for getting more CBPR adoption.
We know, there’s lots of privacy news, guidance and documentation to keep up with every day. And we also know, you’re busy doing all the things required of the modern privacy professional. Sure, we distill the latest news and relevant content down in the Daily Dashboard and our weekly regional digests, but sometimes that’s even too much. To help, we offer our top-five most-read stories of the week.
- IAPP web conference: “Operationalize Data Privacy: How to Leverage Automation, Artificial Intelligence and Machine Learning to Manage Privacy Across Thousands of Resources”
- The Daily Dashboard: “Maine gov. signs new privacy bill into law”
- The Daily Dashboard: “EDPB publishes updated GDPR guidelines”
- IAPP Resource Center: “White Paper – CCPA Compliance Operation: Delivering Data Access via Accounts,” by Lothar Determann, partner, Baker McKenzie, and Mitchell Noordyke, CIPP/E, CIPP/US, CIPM, IAPP Westin Fellow
- Privacy Tech: “A push for meaningful corporate data research ethics reviews is underway,” by Kate Kaye, Redtail Media
Every year around this time, we collectively decide to open the windows, brush off the dust, and kick the spring season off on a clean foot. But as you are checking off your cleaning to-dos, be sure to add your social media profiles to that list. It’s obvious that social media profiles hold sensitive personal data, but letting that information and unknown followers pile up can put your company, customers and employees at risk.
We live in a world where data privacy is top of mind, and in fact, this spring season marks the one-year anniversary of GDPR. Since the law went into effect, we have seen numerous cases of high profile data breaches making headlines. Now more than ever, businesses have an obligation to not only comply with data privacy laws but go above and beyond to secure proprietary, sensitive, and consumer data.
So, what can you do to protect your business, customers, and employees from data breaches and information leakage? Here are three tips for cleaning and securing your online data this spring.
#1: Clean what’s yours
You wouldn’t just clean your bedroom and leave the bathroom a mess, would you? Of course not. So, when managing your data, you first need to understand what online assets you own. Whether corporate or personal, start by taking stock of your owned social media accounts, domains, e-commerce sites, and any other digital channels where you or your company has a presence. Not only should you identify what accounts you own, but it’s necessary to review the privacy settings on those accounts. What are you sharing? Who can see your posts? Your locations? Your contact information?
One of the most overlooked ways of protecting your owned accounts is through strong passwords. You should have a unique password for each of your social media accounts – and for all other accounts, for that matter. Passwords should mix upper- and lower-case letters, numbers and symbols, and be hard to guess. Be sure to avoid names, soccer players, musicians and fictional characters – according to the U.K. government’s National Cyber Security Centre, these are among the worst, most hackable passwords.
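To make that advice concrete, here is a minimal Python sketch of the kind of password it points at, using the standard library’s `secrets` module. The length and character-class requirements are illustrative choices for this example, not an official recommendation:

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password mixing cases, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until the password contains at least one of each class.
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password
```

In practice, a password manager is the easiest way to keep a unique, randomly generated password like this for every account.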
#2: Clean on behalf of your customers
For corporate channels, keeping owned accounts secure protects your brand’s reputation against impersonators, offensive content and spam. What’s more, it also protects your followers – which includes customers – from being exposed to that malicious content. As customers are more frequently using social media channels to engage with brands before making a purchase or obtaining a service, companies must prioritize retaining trust and loyalty among their customers.
To do so, your organization needs to, let’s say, “polish the windows” and be fully transparent about how the company will use customers’ personal data. And with more state laws replicating the precedent set by GDPR, this transparency will not only be a best practice, but a legal requirement.
In addition, you should invest in the identification and remediation of targeted attacks and scams on your customers. This will not only help you gain their trust, but also provide them with ample protection. Finding and removing customer scams – e.g. malware links spread by social accounts impersonating your customer support team – will keep you and your valued customers safe online.
#3: Empower your employees to clean
Easy-to-use tools like Amplify by Hootsuite have turned employees into companies’ greatest brand ambassadors, particularly on social media. This type of promotion is invaluable to marketing teams, but whether on corporate or personal channels, employee use of social media must be addressed by security and marketing teams alike.
This spring, empower your employees to own their own social media cleanliness. Establishing comprehensive education and training programs empowers employees to stay current on corporate online policies as well as social media security best practices. Traditionally, we find that companies have invested in trainings focused on email or insider threat risks but have neglected social and digital channels.
Don’t wait until next spring to clean again
Although it is best to incorporate social media security best practices into your everyday routine, this spring season make it a point to do a deep dive into your personal and professional social media profiles. Your brand, employees and customers will thank you, and your profiles will have a fresh glow after a long winter.
About the author: David Stuart is a senior director at ZeroFOX with over 12 years of security experience.
Copyright 2010 Respective Author at Infosec Island
Many companies use artificial intelligence (AI) solutions to combat cyber-attacks. But, how effective are these solutions in this day and age? As of 2019, AI isn’t the magic solution that will remove all cyber threats—as many believe it to be.
Companies working to implement AI algorithms to automate threat detection are on the right track; however, it’s important to also understand that AI and automation are two entirely separate things.
Automation is a rule-based concept, and it is often conflated with machine learning. AI, on the other hand, involves software that is trained to learn and adapt based on the data it receives. The fact that software is capable of adapting to changes, especially in a rapidly evolving cyber threat landscape, is very promising. It’s also important to note, however, that AI is still at a very immature stage of its development.
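The difference can be sketched in a few lines of Python. This is a toy illustration only – the alerting logic and thresholds are invented for the example, not drawn from any real product. The rule-based check is fixed forever, while the adaptive one re-derives its threshold from the data it has observed:

```python
# Toy contrast between a fixed rule and a data-driven threshold.

def rule_based_alert(login_failures):
    """Fixed rule: alert on more than 5 failures, forever."""
    return login_failures > 5

class AdaptiveAlert:
    """Derives its threshold from observed baseline activity."""
    def __init__(self):
        self.history = []

    def observe(self, login_failures):
        # Record normal activity to build a baseline.
        self.history.append(login_failures)

    def alert(self, login_failures):
        # Alert when the value is well above the observed average.
        # (Requires at least one observation first.)
        baseline = sum(self.history) / len(self.history)
        return login_failures > 2 * baseline
```

In an environment where, say, eight failed logins a day is normal, the fixed rule fires constantly while the adaptive check recalibrates itself to the new baseline.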
The promise of AI bringing cognition to the realm of software has been exciting tech enthusiasts for years. The fact remains however that it is still software. And we should all know by now that software (particularly web-based software) is vulnerable.
As AI matures over the next few years, we can expect to see a great deal of AI-enabled automation solutions. This is especially true for day-to-day provisioning tasks, and particularly around security operations centre (SOC) workflows.
We must not forget that AI technologies are also a double-edged sword, as defenders are not the only ones with access to such capabilities. Attackers who possess the same skills can tip the balance. Thus, with the commoditization of AI, we can expect to see more incidents like the infamous case in which Google’s own speech recognition API was used to bypass Google’s reCAPTCHA mechanism.
Examples such as this lead us to remember that software is only as good as the developers who designed and wrote it. After all, data science is bound by the data that is fed to the algorithms. For critical applications such as those used for medical, law enforcement, and border control purposes, we need to be aware of such pitfalls and actively filter human bias from these systems.
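The point that data science is bound by its input data can be shown with a deliberately skewed toy dataset (the labels and numbers below are entirely made up for illustration). A naive majority-vote “model” simply reproduces whatever imbalance its training data carries:

```python
from collections import Counter

def train_majority_classifier(labels):
    """A naive model that always predicts the most common training label."""
    most_common_label, _ = Counter(labels).most_common(1)[0]
    return lambda _features: most_common_label

# A skewed training set: 90 "approve" decisions, 10 "deny".
biased_labels = ["approve"] * 90 + ["deny"] * 10
model = train_majority_classifier(biased_labels)

# Whatever the input, the model echoes the bias of its training data.
print(model({"applicant": "anyone"}))  # approve
```

Real systems are far more sophisticated, but the underlying hazard is the same: skewed inputs yield skewed decisions, which is why bias filtering matters for medical, law enforcement and border control applications.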
As IT leaders and CIOs build out their AI strategies, software security is a key consideration. Software security is always an important part of any product, whether it is in the development stage or in production, or whether it’s purchased from a vendor—AI is no exception.
When considering the possible applications of AI (health, automotive, robotics, etc.), software security becomes critically important and should remain a concern throughout the application’s lifecycle. And as with all products brought in from third parties, security must be thoroughly vetted before implementation.
Imagine if someone were able to take control of your AI device or software and feed it false answers. Or picture this: an attacker who is able to control the input that your AI needs to process – the input the AI will act on. For example, an attacker who can manipulate the sensor readings describing a car’s surroundings. Feeding wrong information as input would lead to wrong decisions, which can potentially endanger lives. For this reason, the development and use of AI must be secure.
Technologies such as interactive application security testing (IAST) allow software developers (including those developing web-based AI applications) to perform security testing during functional testing. IAST solutions help organizations to identify and manage security risks associated with vulnerabilities discovered in running web applications using dynamic testing methods. Through software instrumentation, IAST monitors applications to detect vulnerabilities. This technology is highly compatible with the future of AI.
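In very rough terms, instrumentation means wrapping application code so that security checks run alongside ordinary functional tests. The Python sketch below is a toy illustration of that idea only – real IAST products use far richer analysis than a regular expression, and the patterns here are invented for the example:

```python
import functools
import re

# Hypothetical patterns a monitor might flag; a real tool goes far deeper.
SUSPICIOUS = re.compile(r"('|--|;|\bOR\b\s+1=1)", re.IGNORECASE)

findings = []

def monitored(func):
    """Instrument a handler: record a finding when raw input looks unsafe."""
    @functools.wraps(func)
    def wrapper(user_input):
        if SUSPICIOUS.search(user_input):
            findings.append((func.__name__, user_input))
        return func(user_input)
    return wrapper

@monitored
def lookup_user(user_input):
    # Imagine this builds a SQL query from user_input.
    return f"SELECT * FROM users WHERE name = '{user_input}'"
```

Run the functional test suite as usual and the wrapper quietly accumulates findings for the security team to review – detection happens during testing rather than after deployment.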
As with all technology, the question comes down to how we apply it in practice. It’s a positive attribute that the industry is concerned about how AI can impact our lives. This should push developers and security teams to be more cautious and to find and implement mechanisms that will help us to avoid catastrophes relating to AI’s decisions and actions. In the end, AI will help us to improve our lives. We, in turn, must ensure that the software doing so is secure.
About the author: Boris Cipot is a senior security engineer at Synopsys. He helps companies of all shapes and sizes to create secure software. Boris joined Synopsys when Black Duck Software was acquired in 2017. He specializes in open source software security, robotics, and artificial intelligence.
Get thee down to the pub – a fix may be out over the weekend
Docker botherer Quay.io’s webhook integration with Bitbucket is looking a bit green around the gills.
Atlassian had warned that by the end of April 2019 it would be making some wholesale changes to Bitbucket user objects, among others, to hand over a bit more control of what data is available to whom.
To quote an anonymous Register reader: “It appears Quay.io didn’t get Atlassian’s memo.”
He went on to tell us: “I’ve been getting attacked left, right and center by developers since yesterday afternoon.” And an enraged developer can be a fearsome thing.
The problem means that one of Quay.io’s party tricks, automated builds of containers, is a no-no for Bitbucket users using webhooks to link the platforms.
The idea of Quay.io’s service is “to automate your container builds, with integration to GitHub, Bitbucket, and more”. Sure, but only if you keep track of API changes.
At the time of publication, the status page for Quay.io notes the borkage (referred to as a “Partial Outage”) as: “Due to a recent change in Bitbucket’s API, Bitbucket triggers are currently non-operative. We are working on a fix to address this change from Bitbucket.”
Which seems a little harsh since Atlassian has hardly concealed its privacy plans. A hardworking support operative at Quay.io confirmed the problem was indeed that pesky API tweak, but said the company’s developers were working to get a fix out over the weekend.
Quay.io is the hosted incarnation of Red Hat’s on-premises container registry service and came as part of the firm’s acquisition of CoreOS at the beginning of 2018. CoreOS purchased Quay back in 2014. A popular pricing option for the hosted service is $60/month for 20 private repos, although solitary devs can score the service for $15/month for five private repositories.
So long as they don’t want any of that automation nonsense with Bitbucket, of course. Until Quay.io deals with the problem, builds will need to be kicked off via a manual upload or some custom git integration. ®
The implementation of the EU General Data Protection Regulation has come with a trickle-down effect on all parties involved. However, data protection officers endured more change and took on more responsibility than anyone else, as their work moved into the spotlight. Steve Wright of Privacy Culture has published a white paper, housed in the IAPP Resource Center, that offers an overview of the GDPR Maturity Framework that aids a DPO, as well as the organization they represent.