Artificial intelligence (AI) is being used for many different purposes – some exciting and some terrifying. One use of AI is to recognize a person’s face. The core steps are that software developers:
- Design programs that start with a current face, such as the face a person presents to a security system or a face that has been identified by law enforcement.
- Create and access a database of facial images from many different sources.
- Program the AI system to scan through all those images in the database (or a selected subset of those images) to determine whether there is a match with the initial face.
- Analyze various practical issues, such as which images to scan. The database may hold billions of images, and the features of the initial face and each candidate face must be compared to show whether the current face matches a face in the database (a simplified sketch of this matching step follows this list).
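To make that matching step concrete, here is a minimal sketch in Python using the open-source face_recognition library. The file names, the tiny three-image “database,” and the 0.6 distance threshold are illustrative assumptions, not a description of any particular vendor’s system.

```python
# A simplified sketch of the matching step described above, using the
# open-source face_recognition library. File names and the 0.6 distance
# threshold are illustrative assumptions only.
import face_recognition

# The "current face" -- e.g., an image captured at a security checkpoint.
# (Assumes the photo contains at least one detectable face.)
probe_image = face_recognition.load_image_file("checkpoint_photo.jpg")
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# A tiny, hypothetical database of previously collected face images.
database_files = ["person_a.jpg", "person_b.jpg", "person_c.jpg"]
database_encodings = []
for path in database_files:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:                      # skip images with no detectable face
        database_encodings.append((path, encodings[0]))

# Compare the probe face to every face in the database and report the
# closest candidate if it falls under a distance threshold.
distances = face_recognition.face_distance(
    [enc for _, enc in database_encodings], probe_encoding
)
best_index = distances.argmin()
if distances[best_index] < 0.6:        # smaller distance = more similar
    print("Possible match:", database_encodings[best_index][0])
else:
    print("No match found in the database")
```

A real system works the same way in principle, just against millions or billions of stored faces rather than three files on disk.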
When you seek access to a building, for example, the AI program may compare your face to faces that have acceptable admission status (generally hundreds or thousands of images). When law enforcement looks for a match, they look through one or more databases that contain hundreds of thousands, or even billions, of images, since there may be multiple images of the same person.
A few examples
According to researchers at the Massachusetts Institute of Technology McGovern Institute for Brain Research, “millions of years of evolution that have shaped circuits in the human brain have optimized our system for facial recognition.” The study found that the human brain separates faces from other objects. In the same manner, AI software now tries to separate human faces from other objects. That may seem an easy task for humans, but teaching a computer just to distinguish a face from any other object is a complex, sophisticated software problem – so eliminating anything that doesn’t look like a face saves a lot of computing power.
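For readers who want to see what that first face-versus-everything-else step looks like in code, here is a minimal sketch assuming the OpenCV library and the frontal-face detector it ships with; the photo file name is hypothetical.

```python
# A minimal face-detection sketch, assuming OpenCV is installed. It only
# separates "face" regions from everything else in a photo -- it does not
# identify whose face it is.
import cv2

# Load a photo (file name is hypothetical) and convert it to grayscale,
# which is what this detector expects.
image = cv2.imread("street_scene.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Use the frontal-face Haar cascade that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Everything outside these rectangles can be ignored by later, more
# expensive recognition steps -- which is where the compute savings come from.
print(f"Found {len(faces)} face region(s)")
for (x, y, w, h) in faces:
    print(f"  face at x={x}, y={y}, width={w}, height={h}")
```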
The New York Times reported in March 2021 that the Department of Homeland Security used AI software called Clearview AI to help identify a suspect. The team behind Clearview AI “had scraped the public web — social media, employment sites, YouTube, Venmo — to create a database with three billion images of people, along with links to the webpages from which the photos had come.” That approach was orders of magnitude more sophisticated than existing law enforcement databases, which search mug shots, driver’s licenses, and passport photos.
“The case represented the technology’s first use on a child-exploitation case by Homeland Security Investigations, or H.S.I., which is the investigative arm of Immigrations and Customs Enforcement.” In response, numerous social media companies sent cease-and-desist letters to Clearview demanding, “to no avail,” that it stop using the social media images.
Computers once performed facial recognition rather imprecisely, by identifying people’s facial features and measuring the distances among them, a crude method that did not reliably produce matches. But recently the technology has improved significantly because of advances in artificial intelligence. AI software can analyze countless photos of people’s faces and learn to make impressive predictions about which images are of the same person; the more faces it inspects, the better it gets.
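The older approach can be pictured as a handful of hand-measured distances. The toy example below uses entirely invented numbers just to show how a face was reduced to a few measurements and compared by distance, which is part of why that method was so unreliable.

```python
# A toy illustration of the older, cruder approach: represent a face by a
# few hand-picked measurements (made-up numbers here) and compare two faces
# by how far apart those measurements are.
import math

def feature_vector(face):
    # A few distances between facial landmarks, e.g. eye-to-eye and
    # nose-to-mouth; values and units are invented for illustration.
    return [face["eye_to_eye"], face["nose_to_mouth"], face["face_width"]]

face_1 = {"eye_to_eye": 6.2, "nose_to_mouth": 3.1, "face_width": 14.0}
face_2 = {"eye_to_eye": 6.1, "nose_to_mouth": 3.3, "face_width": 13.8}

difference = math.dist(feature_vector(face_1), feature_vector(face_2))
print(f"Feature distance: {difference:.2f}  (smaller = more similar)")
```

Modern systems instead let the software learn its own, far richer representation of a face from large numbers of photos, which is what drives the accuracy gains described above.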
Some jurisdictions and legislators have sought to ban facial recognition analysis, or at least to require that a judge authorize a warrant before the databases that AI relies on are searched in criminal cases. Lawsuits have also been filed seeking to give citizens control over how their data is used and alleging that using these images to recognize faces is an invasion of privacy.
There is no current federal law (as of the NY Times reporting date) that prohibits or regulates the use of facial recognition AI. Among other defenses, Clearview asserts that its AI software is protected by the First Amendment. A major concern raised by opponents of facial recognition software is that AI could be used to observe people in real time through the broad use of surveillance cameras, in addition to analyzing digital images obtained in more common ways.
Why facial recognition AI isn’t reliable
The New York Times article reports that a major concern about facial recognition AI is that it isn’t reliable enough for law enforcement to use. One agency, the National Institute of Standards and Technology (NIST), regularly tests the “accuracy of facial-recognition algorithms voluntarily submitted by vendors.”
While Clearview has not submitted its algorithm for testing, NIST found in 2019 that many facial recognition AI programs “were less accurate in identifying people of color, meaning their use could worsen systemic bias in the criminal-justice system.” In 2020, there were three cases in which facial recognition AI led to the arrest of the wrong person. All three of those people were Black.
Other studies, according to another New York Times article, indicate a gender bias as well as a skin-type bias: “The darker the skin, the more often the errors arose.” Some of the reasons for these errors (along with inherent flaws in the software programs) include insufficient training, inadequate guardrails, data sets with racial bias and other errors, a lack of accountability for false arrests, an overall lack of transparency, and a lack of agreement about whether the software should be used at all.
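The bias these studies describe is usually measured as a difference in error rates across demographic groups. The sketch below uses entirely made-up records, not NIST data or results from any real system, just to show how a per-group false-match rate might be tallied.

```python
# A minimal sketch of how differing error rates across groups can be
# tallied. The records below are invented for illustration only.
from collections import defaultdict

# Each record: (demographic_group, algorithm_said_match, truly_same_person)
results = [
    ("group_a", True,  True),
    ("group_a", False, False),
    ("group_a", True,  False),   # a false match
    ("group_b", True,  True),
    ("group_b", False, False),
    ("group_b", False, False),
]

false_matches = defaultdict(int)
nonmated_comparisons = defaultdict(int)
for group, predicted_match, actually_same in results:
    if not actually_same:                # comparisons of two different people
        nonmated_comparisons[group] += 1
        if predicted_match:              # ...that the algorithm called a match
            false_matches[group] += 1

# A higher false-match rate for one group than another is the kind of
# disparity the studies cited above describe.
for group in nonmated_comparisons:
    rate = false_matches[group] / nonmated_comparisons[group]
    print(f"{group}: false-match rate = {rate:.2f}")
```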
Proposals by the Office of Management and Budget (OMB) would establish some minimal guardrails, but those proposals are not yet law. Other agencies, such as the Government Accountability Office (GAO), are also reviewing the use of AI.
A major part of the prosecution’s burden in any criminal case is demonstrating that you are the person who committed the crime. When eyewitnesses aren’t available, law enforcement often turns to technology to help prove a person’s identity.
More and more, that technology includes facial recognition AI. The reliability of facial recognition AI and the legal use of that software are issues in criminal cases that involve identification concerns. To discuss all aspects of your criminal case, including facial recognition AI, call us or contact us now to schedule a consultation. Carey Law Office has locations in Bowie, Crofton, and Dunkirk. We also serve Calvert County.
My name is Joe Carey, and I am the founder and principal attorney of the Carey Law Office. I have lived in Maryland my entire life. I grew up in a small town in Prince George’s County and, with the help of my partner in life, Nancy, I raised my family here: three exceptional children (a son and two daughters), and two goofy, spoiled black Labrador Retrievers.