Inside AI Policy

More nations endorse transparency principles for military AI as EO stokes startups’ fear over ‘dual-use’ tech requirements

By Mariam Baksh  / November 28, 2023

While 47 countries have now signed on to principles -- including a commitment to transparency -- for the responsible deployment of artificial intelligence by their militaries, the details of how those principles will affect technology that is also used for commercial purposes, as spelled out in the recent executive order on AI, are drawing pushback from companies that fear harm to competition in the industry.

"The United States has been a global leader in responsible military use of AI and autonomy, with the Department of Defense championing ethical AI principles and policies on autonomy in weapon systems for over a decade,” Sasha Baker, undersecretary of defense for policy, said in a Nov. 22 press release.

“The political declaration builds on these efforts,” Baker said. “It advances international norms on responsible military use of AI and autonomy, provides a basis for building common understanding, and creates a community for all states to exchange best practices.”

The DOD’s press release referred to a State Department notice updated on Nov. 21 with a list of 47 nations that are now signed on to its “political declaration on responsible military use of artificial intelligence and autonomy,” which was first issued in February.

The State Department’s update included a Nov. 1 quote from Vice President Kamala Harris during her participation in the AI Safety Summit hosted by the United Kingdom. There, Harris, along with representatives from 29 other countries -- including China -- committed to applying international norms and rules to artificial intelligence in what is now known as the Bletchley Declaration.

Among other things, such as the need to prioritize human rights and mitigate bias, both the military declaration and the broader Bletchley Declaration -- which notes the potential for AI to help solve global challenges -- call for transparency.

And as the DOD’s release noted, “Military AI capabilities include not only weapons but also decision support systems that help defense leaders at all levels make better and more timely decisions, from the battlefield to the boardroom, and systems relating to everything from finance, payroll, and accounting, to the recruiting, retention, and promotion of personnel, to collection and fusion of intelligence, surveillance, and reconnaissance data.”

Technology with this kind of broad applicability to both military and commercial purposes is typically controlled as “dual use” by the Commerce Department’s Bureau of Industry and Security. But as the Biden administration’s Oct. 30 executive order attempts to put in place some of the first details for addressing transparency, a group of AI investors and startups took issue with the designation in a Nov. 2 letter to President Biden.

The EO instructs the Commerce Department to require “Companies developing or demonstrating an intent to develop potential dual-use foundation models to provide the Federal Government, on an ongoing basis, with information, reports, or records regarding,” among other things, “the ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights.”

A model weight is defined in the EO as “a numerical parameter within an AI model that helps determine the model’s outputs in response to inputs.”
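In practical terms, the weights are the learned numbers inside a model that, together with the inputs, produce its outputs. A minimal sketch in Python (not drawn from the EO; the values and names here are hypothetical) illustrates the definition:

    import numpy as np

    # Hypothetical weights: numerical parameters that help determine
    # the model's outputs in response to inputs.
    weights = np.array([[0.2, -1.3], [0.7, 0.5]])
    bias = np.array([0.1, -0.4])

    def forward(inputs: np.ndarray) -> np.ndarray:
        # The output depends on the inputs and on the weights; whoever
        # possesses the weights can reproduce the model's behavior,
        # which is why the EO asks who owns and holds them and how
        # they are physically and digitally secured.
        return inputs @ weights + bias

    print(forward(np.array([1.0, 2.0])))  # -> [ 1.7 -0.7]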

“The EO defines a new category of AI models designated as ‘dual-use foundation models,’” wrote the signatories, who include representatives from companies like Hugging Face and Shopify, along with the tech incubator Y Combinator, and Meta, which has staked out a position as the only large incumbent AI company to have made its foundation model open source. “While the definition appears to target larger AI models, the definition is so broad that it would capture a significant portion of the AI industry, including the open source AI community.”

But Annie Fixler, director of the Center on Cyber and Technology Innovation at the Foundation for Defense of Democracies, a nonpartisan national security think tank, told Inside AI Policy the EO’s definition of “dual-use foundation models” actually seems quite narrow.

She pointed out that the EO limits the reporting requirement on model weights to companies whose models are capable of “(i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons; (ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or (iii) permitting the evasion of human control or oversight through means of deception or obfuscation.”

Noting the administration’s consultation with industry in drafting the EO, Fixler said, “While companies are rightfully always cautious about requirements and guidelines issued by the federal government, the requirements in the AI executive order are reasonable and align with voluntary commitments that many companies had already signed onto.”

“Those who have expressed concern about the requirements for companies working on dual-use foundation models may be overestimating the impact on their businesses or may be underestimating the security impacts of the government failing to implement reasonable transparency and safety requirements,” she said.