Inside AI Policy

DHS preparing new policy, test and evaluation practices for AI acquisition

By Mariam Baksh / March 19, 2024

The Department of Homeland Security is updating its internal practices for procuring artificial intelligence systems, including through new personnel and infrastructure for testing and evaluation, to comply with the Biden administration’s policies on use of the technology, according to a department “roadmap” for 2024.

“DHS will conduct assessments of new controls and how these controls balance the use of mission enhancing AI with any potential impacts on safety or rights,” reads the roadmap DHS released March 18. “Through oversight by the highest levels of the Department, DHS will ensure all strategies and Departmental guidance are consistent with Administration policy concerning managing risks in procurement and reducing barriers to responsible use.”

The mention of “impacts on safety or rights” is a direct reference to draft guidance the Office of Management and Budget issued for implementing Executive Order 14110. The guidance identified two categories of artificial intelligence systems -- safety-impacting and rights-impacting -- that require federal agencies to take certain measures, such as conducting independent assessments, when contracting with AI vendors.

Industry groups have pushed back on both the independent assessments -- favoring self-certification instead -- and the OMB categories, advocating risk-based categories as the trigger for requirements such as justifying use cases.

According to the roadmap, the department plans to, among other things, update DHS Policy Statement 139-06, “Acquisition and Use of Artificial Intelligence and Machine Learning Technologies by DHS Components,” which was issued in August 2023.

“DHS will issue enterprise-wide AI policy building on the guiding principles in DHS Policy Statement 139-06,” the roadmap reads, noting the revamp is also “in line with the FY2023 National Defense Authorization Act,” and “will address the acquisition and use of AI; considerations for risks; privacy, civil rights, and civil liberties impacts.”

Through its Science and Technology Directorate, DHS will also “establish a Testing & Evaluation working group to support the T&E of DHS systems and publish an Action Plan for T&E of AI/ML enabled systems covering pilots and use cases, algorithm training and test data, acquisition of AI-enabled systems, use of AI for T&E, and AI-enabled adversaries,” according to the roadmap.

And S&T will “create a federated AI testbed that will provide independent assessment services for DHS components and homeland security enterprise operators” with the buildout to include an initial use case and a five-year execution plan, the roadmap says.

The roadmap touts three new AI pilots, with the department noting, “From day one, DHS components and offices have coordinated closely with our Privacy Office, the Office for Civil Rights and Civil Liberties, the Office of the General Counsel, and additional stakeholders.”

But in September 2023, after the department had launched its Responsible AI initiative, DHS’ Office of the Inspector General reported that Customs and Border Protection and Immigration and Customs Enforcement did not adhere to the department’s privacy policies -- which the watchdog also judged insufficient -- in purchasing and using individuals’ geolocation and other data.

This -- in conjunction with DHS pilot programs over the years -- has made civil rights groups, particularly those advocating for immigrants, wary of DHS’ use of AI.

“For years, DHS has been deploying AI powered technologies at agencies such as ICE and CBP largely in secret,” Julie Mao, an attorney for Just Futures Law, told Inside AI Policy. “Now, because of public pressure and the White House policies, it’s being forced to evaluate these programs.”

According to the roadmap, “as early as 2015, the Department piloted the use of machine learning (ML) technologies to support identity verification tasks. Since then, DHS has successfully implemented other AI-powered applications to enhance efficiencies and foster innovation in border security, cybersecurity, immigration, trade, transportation safety, workforce productivity, and other domains critical to protecting the homeland.”

Patrick Toomey, deputy director of the ACLU’s National Security Project, told Inside AI Policy, “DHS needs to be far more transparent about its deployment of black box AI systems that have immense implications for the rights of people in the United States.”

“AI roadmaps and inventories are a start, but the public deserves significantly more information about novel tools that DHS is using to screen travelers at ports of entry, search people's cell phones, and monitor people's social media posts,” he said.

Mao added: “At minimum, today’s roadmap reveals that the department has been testing and piloting AI powered tech on immigrants and communities with little public knowledge or consultation.”

Now, she said, “It’s unclear how much the public will have a say in whether these tools get rolled out on them -- as opposed to simply told after the fact.”