The “Right to be Forgotten” has sailed across the Atlantic. From its origin in a 2014 CJEU case involving Google and subsequent codification in the GDPR, it has now entered the U.S. data privacy landscape in the form of the CCPA’s “Right to Deletion.”
While there are important differences between the two, both essentially allow a data subject (or “consumer,” in the CCPA’s terminology) to ask a data controller (or “business,” in CCPA language) to delete personal information it has collected about that individual.
This development has understandably been the subject of some fascinating philosophical and political discussions, but I am more interested in a basic, practical question: how can companies actually carry out these requests?
Back in December, I had the opportunity to grapple with this question as part of a webinar panel. My co-panelists (Gavin Reinke of Alston & Bird and Sagi Leiserov of Dataguise) and I placed this question in the larger context of individual rights that are codified in the GDPR and are taking firmer shape in the U.S., at least at the state level. Along with the “Right to Erasure/Deletion,” we also addressed the “Right of Access” and the “Right of Data Portability.”
As Sagi, who has worked in the privacy field since 1998 and has been helping large organizations implement privacy controls since 2017, helpfully frames the issue:
“[…] how do we create an environment within our organization that allows us to […] not just write clever policies that accurately describe the requirements but in fact be able to meet those policies?”
Sagi suggests that organizations should focus on four deceptively simple tasks:
- Understand the data you keep. Quite simply, if you don’t know what you have and where it is, you cannot protect it. Many organizations have difficulty accurately identifying where personal information is located and which data elements are associated with this information. This difficulty stems in large part from three common organizational shortcomings:
- Poor tagging of data elements. Databases might be tagged in a way that makes it difficult to determine which data elements are involved. For example, suppose a table column contains only three values (0, 1, and 2), and these are meant to represent a data subject’s gender. So far, so good. But if the column header only makes sense to the person who created it, then even the best AI will not be able to figure out what the numbers represent.
- Sensitive combinations. Poor data management can also lead to a failure to notice when personal information enters the realm of “sensitive” data and becomes subject to more onerous restrictions. For instance, suppose an organization maintains a database that classifies its employees and their spouses by gender and marital status. Taken separately, these data elements do not present a particular challenge. But when an employee’s gender is combined with “name,” “marital status,” and a spouse’s gender, it becomes possible to infer the employee’s “sexual orientation,” which falls under GDPR Art. 9 (“Special Categories of Personal Data”). Accordingly, organizations should be able to scan their systems to identify and flag these “sensitive combinations.”
- Inferences drawn from personal information. Under the CCPA, “inferences” drawn from personal information comprise a separate category of personal information, which is subject to its own disclosure obligations and data access rights. From a compliance standpoint, companies need to think about how they can make these inferences and structure them within their larger data management systems.
- Know the identities in your system. Who are the people behind the data (i.e., the “data subjects”) in a company’s system? If a company cannot link personal information to specific individuals, it will be impossible to comply adequately with requests for access, erasure, or portability.
- Align third parties with the identities they process. When responding to a request for erasure, a company must ensure that any third party with whom it has shared personal information about the data subject also erases that data. Of course, it cannot do this if it does not have an effective and reliable way to link these data subjects with third parties.
- Delete personal information. Deleting personal information to comply with a request for erasure might have unintended, and negative, consequences due to the nature of interlinked systems. For instance, if a consumer transaction is deleted from one database, it might break a financial reporting operation in two or three other systems. One solution is to use “masking” and “anonymization” so that companies can meet the erasure requirement while avoiding problems with data quality and integrity.
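To make the “sensitive combinations” scan from the first task concrete, here is a minimal sketch. The column names and the rule set are illustrative assumptions, not a standard; a real scanner would draw its rules from a privacy team’s classification policy:

```python
# Hypothetical rules: each entry pairs a set of column names with what the
# combination could reveal if those columns appear together in one table.
SENSITIVE_COMBINATIONS = [
    ({"name", "gender", "spouse_gender", "marital_status"}, "sexual orientation"),
    ({"name", "diagnosis_code"}, "health data"),
]

def scan_schema(table_name, columns):
    """Return warnings for any sensitive column combination present in a table."""
    cols = set(columns)
    findings = []
    for combo, reveals in SENSITIVE_COMBINATIONS:
        if combo <= cols:  # every column in the rule is present in this table
            findings.append(
                f"{table_name}: columns {sorted(combo)} may reveal {reveals}"
            )
    return findings

# Example: an HR table that mixes employee and spouse attributes
warnings = scan_schema(
    "employees",
    ["employee_id", "name", "gender", "spouse_gender", "marital_status", "salary"],
)
```

Run against each table’s schema, a scan like this flags combinations that individually look harmless but jointly fall under GDPR Art. 9.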
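Aligning third parties with the identities they process can be as simple as keeping a sharing registry that an erasure workflow consults before closing a request. The class and identifiers below are illustrative assumptions, not a specific product’s API:

```python
from collections import defaultdict

class SharingRegistry:
    """Tracks which third parties have received each data subject's information."""

    def __init__(self):
        # data subject ID -> set of third-party names
        self._shared_with = defaultdict(set)

    def record_share(self, subject_id, third_party):
        """Log that a subject's personal information was shared with a third party."""
        self._shared_with[subject_id].add(third_party)

    def erasure_targets(self, subject_id):
        """Third parties that must also be asked to erase this subject's data."""
        return sorted(self._shared_with.get(subject_id, set()))

registry = SharingRegistry()
registry.record_share("subject-42", "AdNetworkCo")
registry.record_share("subject-42", "AnalyticsInc")
targets = registry.erasure_targets("subject-42")
```

The point is the discipline, not the data structure: every outbound share is recorded at the moment it happens, so an erasure request can be fanned out to every recipient.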
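Finally, the masking/anonymization approach to deletion can be sketched as follows. The field names are hypothetical; the idea is that identifying fields are overwritten while the fields that downstream financial reporting depends on are preserved:

```python
# Hypothetical set of fields that identify the consumer and must be masked.
IDENTIFYING_FIELDS = {"customer_name", "email", "customer_id"}

def anonymize_transaction(record):
    """Return a copy with identifying fields masked and reporting fields intact."""
    masked = dict(record)
    for field in IDENTIFYING_FIELDS & masked.keys():
        masked[field] = "REDACTED"
    return masked

txn = {
    "transaction_id": "T-1001",
    "customer_id": "C-77",
    "customer_name": "Jane Doe",
    "email": "jane@example.com",
    "amount": 249.99,   # preserved so revenue reports still reconcile
    "date": "2020-01-15",
}
anon = anonymize_transaction(txn)
```

Because the transaction row survives with its amount and date, reporting pipelines in other systems keep working, while the record can no longer be linked to the individual.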
The bottom line is that cutting-edge individual data privacy rights will mean very little if companies do not figure out practical, effective ways to comply with data subject requests.