EDITORIAL

Facial-recognition tech may have value, but real-time surveillance goes too far

Software that identifies people in videos could create a privacy nightmare. A patchwork of city bans won’t prevent it.

A security CCTV camera at the Olympic Stadium in the Olympic Park in London, March 2012. Sang Tan/Associated Press

If you stroll through some parts of London, facial-recognition systems linked to street cameras analyze whether you look like someone wanted for a crime. The technology isn’t very accurate and often makes false matches. And warrantless surveillance that can identify people in public in real time corrodes individual rights that ought to be secure in a democracy.
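
To see why a high false-match rate matters at street scale, consider a purely illustrative calculation; the crowd size, watch-list size, and error rates below are assumptions chosen for the example, not figures reported for London’s system.

    # Illustrative only: every number here is an assumption, not a measurement
    # from any real deployment.
    daily_passersby = 100_000   # faces scanned by a street camera network in a day
    actually_wanted = 10        # passersby who are genuinely on the watch list
    false_match_rate = 0.001    # 0.1% of innocent faces wrongly flagged (assumed)
    true_match_rate = 0.90      # 90% of wanted faces correctly flagged (assumed)

    false_alarms = (daily_passersby - actually_wanted) * false_match_rate
    true_alerts = actually_wanted * true_match_rate
    share_wrong = false_alarms / (false_alarms + true_alerts)

    print(f"False alarms per day: {false_alarms:.0f}")          # about 100
    print(f"True alerts per day:  {true_alerts:.0f}")           # about 9
    print(f"Share of alerts that are wrong: {share_wrong:.0%}")  # about 92%

Under those assumptions, more than nine out of ten alerts would point at an innocent person, simply because almost everyone walking past the camera is not on the watch list.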

In the United States, such real-time facial surveillance could be on the horizon. Even today, it’s troublingly unclear which police forces are using facial recognition, and how, because the technology is largely unregulated. It’s time to change that, including here in Massachusetts.

Boston Police don’t use facial-recognition software, according to Sergeant Detective John Boyle, a spokesman. However, police in many cities, including New York, commonly do use it to identify suspects after a crime has been committed. Even if they don’t run real-time face surveillance, investigators can put the image of a face captured on, say, a store’s security camera into programs that look for matches in databases of mug shots, driver’s license photos, or billions of pictures scraped from social media sites.
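
In outline, these matching programs reduce every photo to a numeric “embedding” and rank database entries by their similarity to the probe image. The sketch below is a minimal illustration of that idea; the embedding vectors are random stand-ins, and the database, names, and similarity threshold are assumptions for the example, not a description of any vendor’s actual product.

    import numpy as np

    # Assume some face-embedding model has already turned each photo into a
    # 128-dimensional vector. Random vectors stand in for real embeddings here.
    rng = np.random.default_rng(0)
    mugshot_db = {f"record_{i}": rng.normal(size=128) for i in range(1000)}
    probe = rng.normal(size=128)  # face captured by a store's security camera

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Rank every database entry by similarity to the probe face.
    scores = {name: cosine_similarity(probe, vec) for name, vec in mugshot_db.items()}
    top_candidates = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5]

    # A "match" is declared only above some threshold, a value chosen by the
    # vendor or the agency and a major source of false positives if set loosely.
    THRESHOLD = 0.3
    for name, score in top_candidates:
        label = "possible match" if score >= THRESHOLD else "below threshold"
        print(f"{name}: similarity {score:.2f} ({label})")

How high that threshold is set, and whether a top-ranked candidate is treated as a lead or as evidence, are exactly the kinds of choices that today are made without statutory rules.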

The technology’s accuracy is questionable even in this context, especially when it comes to identifying women and minorities. That’s why Boston Police Commissioner William Gross has said facial recognition is not yet reliable enough to be worthwhile. Yet there’s little oversight of the police around the country who do use it. And in most places, there’s nothing explicitly preventing authorities from following London’s lead and using the software on live camera feeds, turning passersby into unknowing participants in a virtual police lineup. Clare Garvie and Laura M. Moy of Georgetown Law’s Center on Privacy and Technology have documented how Detroit and Chicago have acquired and could activate this capability.

Too often, when it comes to new technologies that erode privacy — whether it’s data collection by search engines or monitoring by home devices — the knee-jerk response is to declare that resistance is futile. An investor in Clearview AI, which supplies police with face-matching software, blithely told The New York Times that such technology “might lead to a dystopian future or something, but you can’t ban it.”

He’s wrong about that. Several cities, including San Francisco, Somerville, Cambridge, and Brookline, have blocked public agencies from using facial recognition. California has enacted a three-year moratorium on the use of the technology in police body cameras. The European Union has considered a broader moratorium — a move even Google would support.

Proposed limits on facial recognition have bipartisan backing in Washington — and on Beacon Hill. Bills pending in the Massachusetts Legislature would put a moratorium on public facial-recognition systems until lawmakers devise rules on how they can be used.

It would be wise to approve that moratorium in the current legislative session, unless state lawmakers can somehow quickly come up with meaningful restrictions. Used indiscriminately, facial recognition threatens to upend our concept of anonymity, inhibit our freedoms of assembly and expression, and exacerbate racism in the justice system. Eventually, however, it should be possible to lay down rules that would preserve privacy and allow police to use aspects of the technology responsibly.

For example, after-the-fact identification of crime suspects or victims could be acceptable if accuracy improves and if lawmakers impose limits that don’t exist now, like requiring that police use the technology only to solve violent crimes, and only when investigators can show the same probable cause needed to obtain a judge’s wiretap order. Forensic specialists also should set standards for when facial-recognition data is good enough to be admissible in court.

But real-time face dragnets, like those run on the streets of London and in Chinese cities, should be banned outright in the US, as criminal-justice watchdogs have suggested. This more active form of surveillance is opposed by conservatives and liberals alike, because it amounts to the virtual imposition of a “show me your papers” regime in public places.

Democracy is well served by rules that define when and how law enforcement can use many technologies, from telephones to GPS trackers. The goal should be to balance protecting individual liberty with fighting crime. Everyone benefits from technology used only in reliable and fair ways: Real criminals are identified and the innocent are protected. By hitting the pause button now, lawmakers in Massachusetts and around the country can achieve this balance for facial recognition.


Editorials represent the views of the Boston Globe Editorial Board. Follow us @GlobeOpinion.