Public-interest advocates stress need for ‘real-world’ testing of facial recognition technology

By Mariam Baksh / March 13, 2024

Testing conducted by the National Institute of Standards and Technology on facial recognition tools is not enough to justify federal agencies deploying the artificial intelligence capability, according to civil rights proponents arguing for transparency and other regulatory safeguards.

“Proponents of law enforcement use of facial recognition often claim that algorithm testing conducted by NIST provides sufficient independent validation of system performance. This is false,” said Katie Kinsey, chief of staff and technology policy counsel for The Policing Project at NYU Law School.

Kinsey was testifying March 8 before the U.S. Commission on Civil Rights regarding use of the technology by the departments of Justice, Homeland Security, and Housing and Urban Development.

“Although NIST’s testing provides an important benchmark of algorithms’ technical capabilities, NIST doesn’t test these algorithms on the actual, low-quality images used by law enforcement,” Kinsey said.

DHS officials testifying before the commission cited their use of “high performing algorithms,” as tested by NIST, and referenced a September 2023 directive requiring their partners to do the same, though they declined to confirm whether partners are in fact complying.

But as facial recognition companies also use the NIST vendor test to market their products, Kinsey and other public-interest advocates, as well as lawmakers like Sen. Ed Markey (D-MA), are pointing to the need for the technology to be tested under the real-world conditions in which it is actually used.

“No matter what you might hear from other panelists today or read in press releases from facial recognition vendors, here’s the truth: there is no publicly available, independent testing of facial recognition technology as it is actually used by law enforcement for criminal investigations,” Kinsey said.

In an exchange during the hearing with Hoan Ton-That, founder and CEO of Clearview AI, Commissioner Mondaire Jones asked whether the facial recognition company has “conducted or facilitated and then publicly released the results of any operational testing of its [Facial Recognition Technology] service as used by law enforcement agencies.”

Ton-That, who has repeatedly expressed support for regulation, including at the Senate’s AI Insight Forums in November, said the service provides the means for such evaluations, but that it was up to its users to release the results of any such testing.

“We're providing tools right now, it's very easy for any user of Clearview's administrator to go in and generate a report on how many searches, what type of crimes have been solved with it, and so on, who's doing the searches,” he said. “I think there's more we can do as a vendor. But at the end of the day, it comes down to those law enforcement agencies and their willingness to share and unfortunately, there's not that many of them who want to be as transparent about how they use it.”

Ton-That added, “We store and house their data, though we don’t have direct access to it. It would be in violation of their expectation around sensitive criminal investigative data to share that without their consent.”

Kinsey seized the opportunity to drive home her point.

“Without regulation, we're not going to get the kind of transparency and information that we need,” she said. “The response to your question about operational testing that Clearview does was basically 'Well, that's up to the agencies.' You're going to continue to get this kind of buck passing if there isn't regulation that requires it and sets these responsibilities.”

As a good place to start, Kinsey’s testimony endorsed provisions in draft Office of Management and Budget guidance for implementing the Biden administration’s Oct. 30 executive order on artificial intelligence.

“In its recent draft guidance on federal agency use of artificial intelligence, [OMB] likewise made clear that if federal agencies want to use AI like facial recognition, they ‘must conduct adequate testing to ensure the AI … will work in its intended real-world context,’ which means ‘[t]esting conditions should mirror as closely as possible the conditions in which the AI will be deployed,’” she said.

“When it comes to law enforcement use of facial recognition, this type of independent, real-world testing simply does not exist,” Kinsey said in written testimony submitted to the commission. “Or if it does, it has not been made public. And without this kind of testing, there is no way for the public to know how accurate or biased these systems are for law enforcement use.”

A public housing advocate testifying before the commission also noted the technology’s widely documented problems with accurately identifying minority populations, and endorsed legislation proposed by Rep. Yvette Clarke (D-NY), the No Biometric Barriers to Housing Act, to address related issues of surveillance and discrimination by HUD grantees.

Michelle Ewert, director of Washburn University Law School’s legal clinic, said HUD should consider conditioning its contracts with housing providers on banning use of the technology, regardless of how it is funded. Short of that, she said, such contracts “should include clear parameters for use of these technologies, training for agency staff and housing providers on the technologies, and audits to ensure the technologies are reliable.”

Commissioner Jones issued a scathing response to the absence of both HUD and DOJ from the hearing, which commission staff said was the agencies’ only opportunity to provide public comments or information on the subject before the commission publishes the report on its investigation, due Sept. 30.

“It suggests to me that DOJ and HUD are embarrassed by their failures and are seeking to avoid public accountability,” Jones said. “I also believe that their approach is a strategic error, because now Congress is going to pay even closer attention…so will the American people.”

DOJ did not respond to a request for comment.

“HUD does not use any facial recognition technology and urges its program participants to find the right balance between addressing security concerns and respecting residents’ right to privacy,” a spokesperson for the department told Inside AI Policy, adding that the agency “plans to submit written testimony and welcomes future opportunities to collaborate where appropriate.”

The commission is accepting comments through April 8 for incorporation into its report.