
Getting the First Amendment wrong

In a pending court case, the right to free speech is impinging dangerously on the right to privacy.

Hoan Ton-That, founder of Clearview AI, showed the results of a search for a photo of himself. Clearview AI scrapes billions of photos from the Internet, adds facial recognition, and sells them to government, law enforcement, and immigration agencies. (Amr Alfiky/The New York Times)

Think of the last time you changed your profile picture on Facebook or Instagram. When you uploaded that photo, did you assume you were agreeing to let anyone do anything they want with that photo, including putting you in a facial recognition database to track your location and every photo of you on the Web? Facial recognition company Clearview AI seems to think so. The company is bolstering its legal team to build a First Amendment argument to help justify its dubious and dangerous facial recognition business. All of our privacy hangs in the balance.

Clearview AI scrapes billions of photos from the Internet, adds facial recognition, and sells them to government, law enforcement, and immigration agencies. Clearview AI apparently wants “to assert a free-speech right to disseminate publicly available photos.” It also wants to gut Illinois’ Biometric Information Privacy Act, arguing that enforcing the statute against the company would violate the First Amendment. BIPA is the most important biometric privacy law in America because it allows people to sue companies directly for violations. If Clearview AI were to succeed in its legal challenges, it would actually inhibit free expression in general and eviscerate many of America’s already limited privacy protections.


Clearview AI is wrong about privacy and wrong about the First Amendment. It would have you believe that the moment you post a photo of yourself on Facebook or walk outside your house, you abandon any privacy interest in your image or your whereabouts because they are now “public.” This is the case whether you are going to the grocery store or to a Black Lives Matter rally. Clearview AI’s position isn’t just wrong as a matter of common sense, but as a matter of law as well. Nevertheless, its legal team is sure to point to language in pre-digital court cases about how there is less privacy in public and to appeal to vague notions of “publicly accessible information.” They’ll wave this concept of public information like it’s a talisman that lets them do anything they want.

But the word “public” is essentially meaningless in the law. It has no set definition and few consequences. It’s like asking whether Clearview AI does “innovation” or “big data.” The “in public” debate is a red herring that distracts from more important issues — like how Clearview AI’s unregulated data practices rip data out of context to harm people, relationships, and public institutions. Clearview AI’s argument boils down to the idea that nothing you expose to anyone else is worthy of protection, but that everything they do with their technology is above the law. That’s a remarkably dangerous position to take in a world where virtually every action we take online and out and about in society leaves us vulnerable in ways we can scarcely imagine.


Facial recognition doesn’t just jeopardize our privacy; it’s a tool for shutting down expressive activity as well. Imagine every single person who protested against racial injustice this summer being identified and tagged as a troublemaker in government systems. Imagine every random photo of you taken at a party, restaurant, or event logged to reconstruct your geolocation history. Facial recognition is a tool for stalking, a tool for shaming people who may have made mistakes, and a tool for government oppression. Clearview AI’s decision to make this claim at a time of both rising authoritarianism and an overdue national reckoning on racial justice sadly reveals which side it is on.


There’s also a rich irony to Clearview AI arguing that government regulations protecting ordinary people from a technology of oppression are an affront to civil liberties. As for the First Amendment, the Supreme Court has carefully balanced privacy and free expression in a series of cases, recognizing that both privacy and free expression are essential civil liberties for any free society. Nor are free speech and privacy always in conflict. Our laws have long recognized that a special kind of privacy — intellectual privacy — is essential to free expression by allowing us to experiment with new ideas and beliefs. The core of the First Amendment’s commitment to free speech is protecting individual speakers like protestors and journalists from government oppression, not giving constitutional protection to dangerous business models that inhibit expression and give new authoritarian tools to governments.

If Clearview AI were to prevail and foist its dangerous reading of the First Amendment on our law, the rest of us would all be worse off, including the technology sector as a whole. A few weeks ago, the European Court of Justice ruled that America’s weak privacy laws raised serious questions about whether European data could be processed in the United States. The European Court was particularly concerned about the extent to which government surveillance in the United States was unchecked, and whether privacy rights were enforceable in American courts. Yet Clearview AI is trying to use the First Amendment to ensure a freedom to surveil at will. The European Court’s decision has imperiled a multibillion-dollar trade in data — every Google search and every Facebook status update made by Europeans, for starters. If Clearview AI were to get its way, the only winner would be Clearview AI. And our privacy, our free speech, and American industry as a whole would be the losers.


Woodrow Hartzog is a professor of law and computer science at Northeastern University. Neil Richards is the Koch Distinguished Professor in Law at Washington University School of Law and co-director of the Cordell Institute for Policy in Medicine & Law.