Bill C-11 was the gift that needed returning

Sometimes a long-awaited gift that arrives wrapped in sparkly paper disappoints when it is opened. That is the case with Bill C-11, introduced in November 2020, which included the Consumer Privacy Protection Act (CPPA).

Pitched as an implementation of the privacy promises made in the Digital Charter, and as an answer to a decade’s worth of calls for meaningful private sector privacy law reform, at first glance the Bill contains a number of features privacy advocates want: bringing de-identified data into the scope of the law, addressing transparency for automated decisions, adding provisions for data portability, and making big improvements to the enforcement powers of the Office of the Privacy Commissioner of Canada (OPC).

But that old cliché, the devil is in the details, is never truer than when applied to a 122-page piece of legislation. A dive into the particularities of those provisions reveals their shortcomings. Consider C-11’s treatment of de-identified data, which, done right and well regulated, can be privacy-protective and security-enhancing. De-identified data is data that has been processed so that it can no longer be used to identify an individual. De-identification undeniably runs on personal information, as generated by personal transactions and behaviour, but if done properly, the result should not be linkable to an identifiable individual. However, reams of research confirm that de-identification will never be foolproof in our big-data age, and such data is increasingly used to make consequential decisions about individuals and groups. So, given the risks and impacts, bringing de-identified data unambiguously into the scope of the law would be a privacy win: not to prohibit its use, but to ensure that the processing and use of such data happen within a framework of accountability and transparency, with consequences for harms if they arise and firm prohibitions on re-identification.
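
To make the re-identification risk concrete, here is a minimal Python sketch of a linkage attack. The data, field names, and values are entirely hypothetical (nothing here comes from the Bill or any real dataset); the point is simply that stripping a direct identifier such as a name, while leaving quasi-identifiers behind, can still expose an individual once the records are joined against auxiliary data.

```python
# Hypothetical "de-identified" records: the name has been removed, but
# quasi-identifiers (postal code, birth year) remain.
deidentified = [
    {"postal": "K1A 0A6", "birth_year": 1980, "diagnosis": "X"},
    {"postal": "M5V 2T6", "birth_year": 1975, "diagnosis": "Y"},
]

# Hypothetical auxiliary data (e.g., a public directory) linking names
# to the same quasi-identifiers.
auxiliary = [
    {"name": "A. Smith", "postal": "K1A 0A6", "birth_year": 1980},
]

# Linkage attack: match on the shared quasi-identifiers. A unique match
# re-identifies the individual behind a "de-identified" record.
for record in deidentified:
    for person in auxiliary:
        if (record["postal"], record["birth_year"]) == (
            person["postal"],
            person["birth_year"],
        ):
            print(f"Re-identified {person['name']}: diagnosis {record['diagnosis']}")
```

This is, of course, a toy example; real linkage attacks operate at scale, but the mechanism is the same one the research literature warns about.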

C-11, however, is not unambiguous. It allows data to be de-identified without consent for three purposes: for internal research, for a prospective business transaction, or for disclosure for a prescribed socially beneficial purpose. Does that mean de-identified data can only be used without consent for those purposes? Would consent be required for other purposes? Or are other purposes disallowed?

Given room enough and time, we could go through each of the other “first glance” wins in the Bill. And then we could turn to all the outright losses, not least a failed attempt to address the growing inadequacy of a purely consent-based regime through a range of exceptions for “business operations” which, unlike similar provisions in Europe’s General Data Protection Regulation, are not constrained by the recognition of privacy as a human right. An exception for cases where “obtaining the individual’s consent would be impracticable because the organisation does not have a direct relationship with the individual” brings to mind the facial recognition purveyor Clearview AI, the company that non-consensually scooped billions of images from the internet and was the subject of scathing findings by the OPC, which labelled it an enabler of mass surveillance.

In the end the true test is this: if C-11 had become law, would privacy protections be stronger for people across Canada? Would the power imbalance between corporate data collectors and individuals be mitigated? More concretely, would the recent scandals involving personal information (the Facebook/Cambridge Analytica attempt to undermine democratic elections, the provision of Clearview AI facial recognition tools to police across the country, the non-consensual use of facial analytics by Cadillac Fairview) have been prevented, or, at least, would those responsible have faced more effective consequences? The sad answer is no. Not only would it have been easier to claim exceptions to consent, but the Privacy Commissioner notes that consent violations would be out of scope for administrative penalties.

C-11 is inadequate to address the reality that 21st-century business models cast us not just as consumers but as the consumed. Stronger protections are required to ensure that data-driven innovation respects privacy rights, not least because of the downstream impacts on associated rights, including equality. Canada lags in technology adoption, in part because social trust in innovative data-enabled technologies is currently at a low ebb. Better privacy laws, better protection, better trust, better business. The good news is that the present that only looked good on the outside was returned to sender when Parliament was dissolved. Let’s hope that its creators learn from its reception and that its replacement is worth keeping.

[Originally published in The Hill Times, October 27, 2021]