Why the CCPA’s ‘verified consumer request’ is a business risk
Sometimes it seems like all authenticators are compromised. Passwords, identity documents and even knowledge-based authentication: a plethora of these and other authenticators are readily available on the web or the dark web.
The terrible beauty of the California Consumer Privacy Act is that innumerable companies will soon be required to undertake totally novel consumer-facing responsibilities. In the name of empowering consumers, the law actually introduces threat vectors that fraudsters can exploit. This presents a considerable risk to organizations: a data breach enabled while ostensibly complying with the law and supporting a consumer's data access request.
There is a simple and innocuous-sounding CCPA requirement stating that requests for access and deletion must be "verified." However, the law does not clarify what qualifies as verified.
So, similar to what we've seen for the EU General Data Protection Regulation, companies will be adopting a range of low-tech solutions to satisfy the verification requirement. Current concerns center on responding to requestors invoking their privacy rights, without serious contemplation of what it means to respond to an access or deletion request from an imposter. Those who have been in this space for a while will recall the 2004 ChoicePoint breach, in which the data aggregation company inadequately screened its customers such that identity thieves were able to set up fake businesses as a way to buy personal information.
The GDPR contains a similar requirement and so presents a learning opportunity for those seeking compliance with the CCPA. Researchers James Pavur and Casey Knerr recently found just how easy it is to take advantage of the European right to a data subject access request. They conducted an exploratory social engineering experiment ("GDPArrrrr: Using Privacy Laws to Steal Identities"), finding that some companies acted upon receipt of the most basic, easily forged or obtainable documents, even a postmarked envelope or a heavily redacted photocopy of a bank statement.
The current compliance responses are manual, naive and easily manipulable. Pavur and Knerr believe this stems from the fact that online identity verification is fraught with fraud.
As privacy pros, we know that mounds of personal information, including document images themselves, have been compromised by hackers and are available in dark web marketplaces. This is exactly why regulated industries, such as financial services, spend billions on robust identity verification tools and services. Because identity theft has traditionally been monetized into fraudulent accounts, target companies have heightened awareness of how identity data can be manipulated. They have invested in anti-fraud, know-your-customer and anti-money-laundering compliance programs, so why would an e-commerce or retail outfit naively think it can sufficiently identify CCPA requestors? Pavur and Knerr saw this in a sectoral pattern of responses to their experiment, with larger and/or regulated entities being firmer with their identity verification requirements.
Now we hear echoes of ChoicePoint, because fulfilling a fraudulent request for access equates to giving a third party unauthorized access to personal information. A consumer file that is released to the wrong party can be misused for tax, insurance and other financial frauds, as well as spear phishing, stalking and other crimes. It could even be the basis for the CCPA right of private action.
Fraudulent requests for deletion can affect the value of a company's data holdings and its analytics operations. As presciently noted by Pavur and Knerr, "We would expect future work which replicates a more sophisticated attacker, either via technical means (such as email account hijacking or spoofing) or via physical means (such as passport forgery), would likely have substantially higher success rates than this baseline threat model."
The mandated right to access can be a boon to consumers, but manipulated on a broad scale, it can also be a boon to malfeasance.
Imagine getting consumer files across every retail and services sector out there by flooding companies with massive automated requests for access to personal information. Just like spam, there will be companies that respond if the requests have the veneer of legitimacy. It is a new door for improper data access: not a back door but an actual, legit front door for fraudsters to obtain all manner of valuable personal information.
The emphasis now is to bend over backward to help consumers to invoke their new rights, but if this is not done well, consumers will ultimately be hurt by fraudsters tampering with their data using the consumer request mechanism.
Companies need to be vigilant as they set up their consumer response processes. This "verified consumer" part is no small thing. It requires a robust commitment to accurately sourcing your verification data, skill in identifying dubious requests and a healthy dose of skepticism.
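What "skill in identifying dubious requests" might look like in practice can be sketched in code. The following is a minimal, hypothetical illustration — the signal names, weights and thresholds are invented for this sketch, not drawn from the CCPA, its regulations or the Pavur and Knerr paper — of a triage step that scores verification evidence and refuses to release a file on weak, easily forged documents alone:

```python
from dataclasses import dataclass

# Hypothetical verification signals a response team might collect for a
# CCPA access/deletion request. Names and weights are illustrative only.
@dataclass
class RequestEvidence:
    email_matches_account: bool = False      # request arrived from the email on file
    answered_dynamic_kba: bool = False       # knowledge-based answers drawn from live account data
    verified_via_login: bool = False         # requester authenticated into an existing account
    document_is_redacted_copy: bool = False  # e.g., a heavily redacted bank statement (weak signal)

def triage(evidence: RequestEvidence) -> str:
    """Route a request to 'fulfill', 'manual_review' or 'reject'."""
    score = 0
    if evidence.verified_via_login:
        score += 3
    if evidence.answered_dynamic_kba:
        score += 2
    if evidence.email_matches_account:
        score += 1
    if evidence.document_is_redacted_copy:
        # Penalize the forgeable paperwork the GDPR experiment exploited.
        score -= 1
    if score >= 3:
        return "fulfill"
    if score >= 1:
        return "manual_review"
    return "reject"
```

The design point is that no single weak signal unlocks a consumer file; a redacted photocopy alone is rejected, and borderline cases go to a human reviewer rather than being auto-fulfilled.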
It's ironic that this next-gen data breach could arise out of well-meaning efforts to comply with a new privacy law. But that's the kind of big data world we live in. Fraudsters will find a way to take advantage of a gap in expertise of this breadth. With awareness and commitment, organizations can dedicate resources to address such requests properly. Concurrently, perhaps this will be a topic of guidance from the California attorney general's office.
Until then, as a sage friend put it, "Why do more work (to acquire PII) when you can just ask?"