Facts, Metrics, and Accountability Needed to Combat False Narratives about Misuse and Racial Bias
Last week I read several articles that repeated misguided assertions about facial recognition technology. As someone who has been immersed in this technology for years, I can attest that many of these assertions are unjustified, misplaced, and misleading to the general public. When I read my daily news alerts on facial recognition and the headlines about how law enforcement uses this technology, I see language that reflects a misunderstanding of how the technology is used in practice, and ignorance of facial recognition's core value to public safety.
What concerns me? Statements claiming facial recognition causes "failures" through false returns and misidentifications; that law enforcement "inappropriately" uses this technology and maliciously accesses photos of innocent, law-abiding citizens; that police "misuse" this technology and "violate" people's rights by "targeting" them on the basis of race, creed, or color. From my view, this is a clear pattern of writing designed to instill fear and create negative feelings among members of the public who have no understanding of how the process really works.
Practitioners like me know better. I will firmly state this is a proven technology that provides great and growing value to public safety.
But, I will also firmly state that law enforcement users of facial recognition technology must live by a clearly defined process and rock-solid policies that are rigorously exercised and audited. If we don’t actively demonstrate the accountability we all live by, we create open space for fear mongers to air their falsehoods.
A report issued by the Center on Privacy & Technology at Georgetown Law on October 18, 2016, makes alarmist claims about law enforcement's use of facial recognition technology and calls for greater oversight and accountability. It sensationally claims that law enforcement queries are biased because a disproportionate number of mugshots in these systems are of people of color. The American Civil Liberties Union (ACLU) and dozens of other groups now want the Department of Justice to investigate law enforcement's use of facial recognition technology because they believe public safety agencies are violating the rights of Americans, especially communities of color. The Georgetown report also suggests that photos of more than half of American adults are already enrolled in facial recognition networks accessible to law enforcement.
As an example of perceived misuse, the Georgetown report indicates facial recognition technology was used in Maryland during protests of alleged police misconduct to identify anyone in the crowds who had active arrest warrants. Civil liberties groups believe this is a violation of citizens’ First Amendment right to free speech. They are calling for legislation to require law enforcement to establish probable cause before even using facial recognition technology. Apparently, the authors ignored the fact that if you don’t know who you’re looking for – which is a primary reason we use facial recognition – it is impossible to establish probable cause.
The Georgetown report also calls for Congress and States to impose strict regulations on the use of facial recognition technology. They recommend that systems should only search against mugshots and not driver’s license photos or other photo IDs. They want any facial recognition comparison of images in the Department of Motor Vehicles (DMV) or other photo ID systems to require a court order, and to be limited to serious crimes only.
The Georgetown report further states that law enforcement should periodically remove innocent people from its search databases, and that a ban should be enacted on video surveillance conducted on the basis of political or religious beliefs, race, or ethnicity.
From my perspective, all of these recommendations reveal a profound misunderstanding by the authors of how facial recognition technology is actually used in police work. Before addressing the false narrative of facial recognition deepening racial bias, I want to address some of these “off the mark” recommendations.
Public safety agencies are not in the business of using facial recognition technology to violate a person’s rights. There are no cases which support that theory.
When a person in the United States wants a license for the privilege of driving on public roadways, they are required to have a photo taken, and that photo must appear on the license document. All states recognize these licenses as valid forms of identification, and each state maintains its own motor vehicle database. As the Georgetown report indicates, decisions to allow law enforcement access to these records vary from state to state; some allow access and others do not. However, to suggest that probable cause must be established before facial recognition technology can be used to compare a probe image against a gallery of driver's license images is simply not practical.
A primary goal of facial recognition in law enforcement is to ascertain a known identity from an unknown face in an image. What if someone commits a crime and has no prior mugshot? How else could the person be identified? Access to a larger database, such as a DMV database, supplements investigations by giving law enforcement agencies a valuable lead to find criminal suspects. When a search returns many candidate faces from the database, it is up to the facial recognition user to examine each face individually and offer a subjective analysis. If proper two-level verification is integrated into the workflow, false positive rates can be minimized.
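The workflow described above can be sketched roughly as follows. This is a minimal illustration, not any vendor's actual algorithm: the toy "embedding" vectors, candidate names, similarity threshold, and function names are all assumptions made for the example. The key point it demonstrates is that a search returns a ranked list of candidates for a human examiner, never an automatic "match."

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face template vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search_gallery(probe, gallery, top_k=3, threshold=0.5):
    """Rank gallery faces by similarity to the probe image's template and
    return the candidates above a threshold for a human examiner to review."""
    scored = [(name, cosine_similarity(probe, vec)) for name, vec in gallery.items()]
    scored.sort(key=lambda item: item[1], reverse=True)
    return [(name, round(score, 3)) for name, score in scored[:top_k] if score >= threshold]

# Toy 3-dimensional vectors stand in for real face templates (illustrative only).
gallery = {
    "candidate_A": [0.9, 0.1, 0.2],
    "candidate_B": [0.1, 0.9, 0.3],
    "candidate_C": [0.8, 0.2, 0.1],
}
probe = [0.85, 0.15, 0.15]

# Every returned candidate is an investigative lead only -- a trained examiner
# (and ideally a second-level reviewer) must verify before any action is taken.
candidates = search_gallery(probe, gallery)
```

In this sketch, the two-level verification mentioned above happens entirely outside the code: the system's job ends at producing a ranked candidate list, and the subjective analysis belongs to the human examiners.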
Another impractical recommendation from the report is to limit facial recognition analysis to "serious" crimes. Anybody who understands police work knows that some people commit low-level crimes while others commit heinous acts without ever having committed a crime before. Why should law enforcement not be able to generate an investigative lead quickly, regardless of the type of crime, if that lead could be sitting in a DMV database? Both murderers and child pornography purveyors ultimately need to be identified, so why should it be harder to generate a lead in a child pornography case than in a murder case?
A good example of this is the 2014 arrest of Ronald Dwaine Carnes of Iowa. He escaped from prison in 1973, while serving twenty years for robbery with a dangerous weapon. He remained under the radar for decades by assuming the names and identities of two other men, William Cox and Louie Vance, and worked as a waiter, cab driver, and computer technician in New Orleans, Louisiana, and the US Commonwealth of Puerto Rico. During his 2014 arrest, police found birth certificates, social security numbers, and a firearm, and he was charged accordingly. But how did police catch him after four decades? It was law enforcement properly using facial recognition technology to search against a database of photos that included millions of law-abiding citizens. Mr. Carnes applied for a driver's license in the state of Iowa under another identity, and the state's DMV facial recognition system detected it.
Facial recognition is not always related to criminal investigations. It can be utilized for other very important humanitarian purposes. As a detective, I used facial recognition for various non-enforcement reasons including identifying elderly persons stricken with dementia, finding lost and missing children, identifying homeless persons with mental illness, and identifying deceased persons. Having access to a variety of image databases greatly assisted with a timely identification of subjects in many of those non-criminal cases. These are just a few routine examples showing how law enforcement’s proper and effective use of facial recognition technology protects and helps people. In light of these facts, the Georgetown report’s alarmist assertions and policy proposals need to be taken with a grain of salt.
In a recent news report by WBAL-TV 11, a representative of the Maryland State Police admitted that no records are kept showing how the agency uses facial recognition technology. The report further states that members of the Baltimore Police Department can perform identity verifications in the field but have no mechanisms in place to track officers' use of this technology. Another report states the ACLU found the Maryland system had not been audited in five years. Are these concerns valid? Yes.
It is always recommended that any agency electing to use facial recognition capabilities document each search and maintain full auditing capabilities. Doing so ensures that all officers within the agency use this technology properly, and brings needed integrity to an already unfairly criticized technology.
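What "document each search" means in practice can be sketched as a simple append-only audit log, one record per query. This is a hypothetical illustration, not any agency's or vendor's actual system: the field names, badge and case identifiers, and class name are all assumptions chosen for the example.

```python
import datetime

class SearchAuditLog:
    """Minimal append-only log: one record per facial recognition search,
    so supervisors and auditors can later answer who searched, when, and why."""

    def __init__(self):
        self._records = []

    def record_search(self, officer_id, case_number, purpose, gallery_name):
        entry = {
            "officer_id": officer_id,
            "case_number": case_number,
            "purpose": purpose,
            "gallery": gallery_name,
            # Timezone-aware timestamp so audits across jurisdictions line up.
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        self._records.append(entry)
        return entry

    def searches_by_officer(self, officer_id):
        """Pull every search a given officer has run -- the basic audit query."""
        return [r for r in self._records if r["officer_id"] == officer_id]

# Hypothetical usage: every search requires a case number and stated purpose.
log = SearchAuditLog()
log.record_search("badge-1042", "2016-001234", "robbery suspect ID", "state_mugshots")
```

A real deployment would add tamper resistance and retention rules, but even this bare shape answers the Maryland criticism above: if the record exists, the question "how is the agency using this technology?" has an auditable answer.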
Vigilant Solutions' FaceSearch and LineUp facial recognition offerings are the standard model and platform for facial recognition in law enforcement. Vigilant recognizes the need for security and accountability by taking a proactive approach: full auditing capabilities are integrated at the Agency Manager level. We provide a self-regulating inquiry system that meets CJIS compliance standards, giving every agency full dashboard monitoring so it can gather agency metrics, manage its photo galleries, and supervise the personnel who access these law enforcement systems.
Read more here: Best Practices to Prevent Facial Recognition Misuse and Bias.