After the Jan. 6 insurrection at the Capitol, the seditionist mob faded into the streets of Washington. Few were arrested on site, one of the many security failures that day. Yet scores of attackers were identified, located, and arrested in the days that followed, all across the country. In addition to their own social media posts, two tools denied the insurrectionists the impunity they sought: security cameras and facial recognition software. Those tools may help protect our democracy by putting its enemies behind bars, but the same tools are being used elsewhere to suppress people seeking democracy.
While many incriminated themselves by publicly posting images of their own lawbreaking, some were identified in images that others took and posted on social media or in footage from surveillance cameras. Photos were linked to names through crowdsourcing and through software that compared faces against databases of known individuals, such as driver’s licenses, passports, military IDs, and photos downloaded from publicly available social media accounts. Had the insurrectionists followed public health recommendations to prevent the spread of COVID-19 and worn masks, that task would have been far more difficult. Yet most of the mob were what we in national security circles politely call “low-information” citizens, people who demonstrate little regard for the health and well-being of others.
In Hong Kong and Moscow, however, it is pro-democracy demonstrators who have been wearing masks precisely to thwart their identification by security services. In China, Russia, and other police states around the world, the technology used in the United States to preserve democracy and reduce crime has increasingly been used to arrest peaceful protesters and advocates of democracy and human rights.
Even in the United States, facial recognition software has in a few cases led to an innocent person being charged and even jailed. We also know that the technology can be unreliable and can misidentify Black people at disproportionately high rates. The use of facial recognition is thus clearly a double-edged sword: it holds the promise of taking criminals off the street, but it carries the risk of abuse, misuse, and mistakes. Law enforcement’s use of our images, taken without our consent, can also infringe on our privacy.
In many respects, this duality of potential for good and bad in a security technology is not new. Law enforcement labs have also made mistakes with fingerprints and DNA, but standards and certification have significantly reduced the error rate. Courts have long established that when we leave our fingerprints or DNA behind in public places — or at crime scenes — they can be used by police for identification purposes when a crime is being investigated. Without the use of fingerprints and DNA for identification of criminals, many more murders and rapes would never result in convictions and many perpetrators would attack additional victims.
With cameras and facial recognition, standards and legal precedents are not yet fully developed. In their absence, public acceptance of facial recognition is not yet where it is for fingerprints or DNA analysis. Facial recognition technology is coming at us fast and holds out the prospect, beyond law enforcement, of quicker and easier access controls in buildings, the replacement of airline boarding passes, preventing fraud and identity theft in personal bank accounts, and maybe even bringing about the long-sought end of computer passwords.
What is needed for greater acceptance, however, is a trusted third party to audit facial recognition software against an established, transparent standard to ensure that each company that makes such software reliably and accurately establishes the identity of a person regardless of their race, gender, or skin color. Other national standards might set the conditions necessary for use of an image for identification, such as the lighting, the angle of observation, and the number of features clearly visible. It would also be progress if courts created generally accepted case law, or legislatures wrote statutes, on when and for what purpose facial recognition may be used in investigations and trials. In this country, no one should be tracked by law enforcement using facial recognition, mobile phones, or any other method unless there is well-established reason to believe that a crime has been or is about to be committed.
Having clear, transparent, and equitable laws and standards on the emerging role of facial recognition software in law enforcement would enable the United States to credibly address the abuse of surveillance cameras and facial recognition software in authoritarian states. We should want emerging technology to help us preserve, protect, and defend democracy where we still have it and not be used as a tool of oppression in nations still under the yoke of authoritarian rulers.
Richard Clarke is a former National Security Council official (1992-2003), CEO of Good Harbor Risk Management, and an author of nine books including “The Fifth Domain: Defending Our Country, Our Companies, and Ourselves in the Age of Cyber Threats.”