Congress is requiring the Department of Homeland Security to explain how it plans to ensure Fourth Amendment protections while using a massive surveillance tool powered by artificial intelligence.
The Repository for Analytics in a Virtualized Environment, or RAVEn, was developed in 2020 by the department’s Homeland Security Investigations unit, which resides within the Immigration and Customs Enforcement agency. Its AI capabilities are meant to help investigators find leads for possible arrests.
The six-bill fiscal 2024 appropriations package set for passage over the weekend included DHS funding for procurement of the technology and may add to pressure on the department to be more transparent and judicious about its AI use.
“Within 90 days of the date of enactment of this Act and quarterly thereafter, HSI shall brief the Committees on projected maintenance costs associated with RAVEn; intended integration of artificial intelligence capabilities; and proposed guardrails to ensure privacy-related concerns are addressed,” reads an explanation of the bill released March 20.
The bill provides tens of millions of dollars above requested levels for border security programs, including $163,547,000 for integrated surveillance towers and $279,875,000 for the National Targeting Center, both of which are also controversial initiatives involving the use of AI.
Civil society groups have been sounding the alarm. A June 2023 report by the Brennan Center for Justice flagged issues of bias and accuracy associated with AI, along with what it described as uniquely opaque reporting structures at ICE since the agency’s inception.
A roadmap that DHS released March 18 noted plans to update policies for AI procurement and to establish independent assessment procedures for AI systems, as required by draft Office of Management and Budget guidance for implementing President Biden’s Oct. 30 executive order.
OMB’s draft guidance specifically lists immigration-related decisions such as determining detention status among AI use cases that should be considered “rights-impacting” and would therefore require certain minimum risk mitigation measures including an AI impact assessment.
Referencing the EO, Julie Mao, an attorney for the group Just Futures Law, told Inside AI Policy, “We understand RAVEn is an HSI program that’s meant to power mass detention and deportations. It’s terrifying to think that ICE will be adding more advanced forms of AI to these mass surveillance tools. We are looking at the automation of the most important rights-impacting decisions in our society: when to incarcerate, deport, and separate families.”
Among the new pilot programs the roadmap flags for advancing generative AI at DHS with tools from Microsoft, Anthropic and Meta is one that would further enhance HSI’s investigative capabilities. DHS highlighted uses related to crimes such as sex trafficking and fentanyl tracking. But civil society groups, particularly those advocating for the rights of immigrant families, remain concerned about mission creep and about access to databases like RAVEn, given the porousness between HSI’s criminal investigations and ICE’s Enforcement and Removal Operations.
“Congress and the secretary of homeland security should establish greater separation between ERO and HSI to acknowledge what HSI’s own senior staff has long said: the two components have distinct missions and the department’s structure should reflect that,” the Brennan Center report reads. “Such a realignment would reduce structural pressures on HSI to involve itself in immigration matters with no nexus to serious crimes and limit ERO’s access to HSI’s array of surveillance tools and troves of personal data.”
Specifically addressing DHS’ AI plans, the Brennan Center report added: “AI tools can absorb the gender, racial, and political biases of the data on which they are trained. Despite the notorious difficulty of measuring AI systems’ accuracy, DHS has said almost nothing about how RAVEn’s AI tools will be tested for accuracy, instead describing in general terms a review by HSI Innovation Lab personnel. No independent review seems to be contemplated, and no testing metrics have been articulated.”
As noted, the DHS roadmap now does contemplate independent review of the department’s AI systems, but DHS did not respond to a request for more details on what that testing might involve, except to say that internal entities such as the department’s office of counsel and office of civil rights and civil liberties would be involved.
