Clearview AI is a tech company that, the New York Times reports, has scraped three billion photos of people from the internet, built a facial recognition system to exploit that database, and is marketing access to police forces. Some 600 police forces, including, the company says, some Canadian ones, are using this unregulated and under-scrutinized tool. This brings all of the social debate about facial recognition into clear and urgent focus: should it be banned? Are there ever cases where the benefits outweigh the risks of its use? How can it be regulated, or can it be effectively regulated at all? Because while larger, more responsible companies have been afraid to set this technology loose on the world, while people mull and policy makers deliberate, a fake-it, break-it, or make-it startup has quietly gone out and made the thing we’ve all been afraid would get made: a system that exploits the personal images we share on platforms we rely on and trust as we participate in the modern world. And it has sold that system to police. Who are buying it.
Let’s break down why this is a problem.
To begin, let’s consider why citizens in a democracy need protection from untrammeled facial recognition. The answer? Because it is a threat to human freedom, pure and simple. Facial recognition uses the physical characteristics of our face to create a mathematical model that is unique to us and identifies us, just like a fingerprint. At CCLA, we think it’s actually helpful to talk about facial fingerprinting rather than facial recognition, because it gives a more accurate impression of what we’re really talking about: an identifier inextricably linked with our body. Taken to extremes, facial recognition let loose on our streets would mean the annihilation of anonymity, a complete inability to move through the world as a face in the crowd. It would give whoever has access to it great power. Companies could track us and use what they learn to try to influence our consumer behavior. Politicians could use that same information to influence our political behavior. Stalkers could use it to have perfect knowledge of the whereabouts of their victims. States could use it to track anyone, from alleged criminals to protestors, or simply everyone, just in case. That’s a dystopian vision, but it’s one that a wide range of people, including Google CEO Sundar Pichai, agree could come to pass unless we’re very careful and very thoughtful about not just when, but IF, this technology should be used.
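For readers curious about the mechanics, the "facial fingerprinting" idea above can be illustrated with a minimal sketch. Everything here is a toy assumption, not any real system's model: the measurements, names, and threshold are invented for illustration. The point is only that a face is reduced to a short vector of numbers, and "recognizing" someone is just finding the closest enrolled vector in a database.

```python
import math

def embed(measurements):
    """Toy stand-in for a face-embedding model: normalizes a vector of
    hypothetical facial measurements so each face maps to one point,
    its 'facial fingerprint'."""
    norm = math.sqrt(sum(x * x for x in measurements)) or 1.0
    return [x / norm for x in measurements]

def distance(a, b):
    """Euclidean distance between two fingerprints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=0.1):
    """Return the name of the closest enrolled face, if close enough."""
    best_name, best_dist = None, float("inf")
    for name, fingerprint in database.items():
        d = distance(probe, fingerprint)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Enroll two scraped photos, then match a new capture against them.
db = {
    "alice": embed([52.0, 31.5, 18.2, 44.7]),
    "bob":   embed([48.3, 35.1, 20.0, 41.2]),
}
probe = embed([52.1, 31.4, 18.3, 44.6])  # nearly identical to alice's
print(identify(probe, db))  # prints "alice"
```

Real systems use deep neural networks rather than raw measurements, but the structure is the same, which is why scale matters: once three billion photos are enrolled, any new image can be matched against all of them in one lookup.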
One line of argument in response is that it’s our own fault. After all, we all share information, including photos, in the course of participating willingly in modern life. When we share a photo on Facebook, for example, most of us have an audience in mind. If we choose to skip the privacy settings and let anyone who uses the platform potentially see the photo, we might have good reason to do so: perhaps we want to post images that let potential employers see what a good, responsible, family-oriented employee we’d make, or perhaps we like the idea that people from all around the world might see and engage with our posts and pictures. Or maybe we’re just not thinking about who else might see it at all. Regardless of the reason, it is simply untrue that when we post even a public photo on a social media platform, we are agreeing that anyone may do anything they want with that information. The terms of service for our use of such sites are one-sided, take-it-or-leave-it contracts, but they are contracts, and most reputable sites prohibit wholesale scraping for commercial use by third parties. We have very little protection from the platforms using our information however they like, but they typically promise us protection from others. We also have privacy laws that govern the terms and nature of consent for uses of our personal information. So, while the online world is under-regulated and our laws are out of date in the face of emerging technology, Clearview’s application is not operating in a lawless world, just one in which the law appears to be ignored.
Why is this particularly problematic for a tool being used (secretly) by law enforcement? Simply because in a democracy, police cannot be above or beyond the law. There’s a public debate that desperately needs to happen around tools of mass surveillance: about the benefits and risks of indiscriminately capturing information about everyone in order to catch the very few bad guys in a sea of innocent bystanders going about their lives. But what the Clearview AI story tells us is that there is an equally urgent debate we need to have about accountability when it comes to police surveillance. This is unexamined technology that arguably relies on unlawfully obtained images; certainly, there are questions that need to be asked and answered. Those questions don’t get asked and answered publicly, however, when surveillance technologies are procured and used in secret, as is the case with Clearview AI. No police force the CBC asked in its pursuit of the Canadian angle of this story would confirm its use of the tool. Yet we know Canadian services are using it: there is even a quote from an unidentified “Canadian law enforcement” officer on the Clearview AI company website. Police often argue that their work is compromised if the investigative tools they use are known to the public. However, the social license to exercise the powers we grant our law enforcement bodies can only exist in a trust relationship, and before we even get to the question of whether there is social benefit in allowing police to use the technology, and if so, whether it outweighs the social risks, we need assurance that our law enforcement bodies are committed to using tools that are lawfully conceived and lawfully implemented.
In a democracy, we make tough choices all the time about what police are and aren’t allowed to do. In Canada, we have a Charter of Rights and Freedoms that lays out fundamental rights we all have, and deserve, simply because we’re human. Sometimes respecting those rights makes the job of law enforcement officers harder; it makes investigations less efficient; it makes it essential to follow the rules. That’s a trade-off we have agreed to: as a free society, we believe our public safety agencies cannot truly keep any of us safe if they themselves are not governed by law and required to uphold our basic values in the course of any and every investigative process.
As a privacy advocate, I get asked all the time whether it’s worth it to get worked up about stories like this, or whether we should just give up on privacy because the horse is already out of the barn.
My response: “The horse is out of the barn” is an inane metaphor, and any livestock owner would agree. If a horse wanders off, you don’t just sit back and say, hey, that valuable horse is gone now, too bad: you go look for the beast. And when you find her, you bring her home and you fix the stall door, or the barn door, or the corral gate, or all three, because you need the horse and it’s worth taking care of her. So it is with our privacy. It’s valuable. It’s a human right. It’s worth taking care of, and it’s worth fighting for. It’s worth hunting down, bringing home, and finding new ways to protect.
Here’s hoping that the Clearview AI story is another Cambridge Analytica moment, one that crystallizes all that we have to lose if we fail to engage with the risks of new technology while remaining open to its benefits. CCLA reiterates our call for a moratorium on facial recognition software until Canada has had a chance, as a nation, to discuss, debate, and dispute first whether this technology should be used at all in a rights-respecting democracy, and then, only if we get past that question, when and how.
About the Canadian Civil Liberties Association
The CCLA is an independent, non-profit organization with supporters from across the country. Founded in 1964, the CCLA is a national human rights organization committed to defending the rights, dignity, safety, and freedoms of all people in Canada.
For the Media
For further comments, please contact us at firstname.lastname@example.org.