Inside AI Policy

Critics of AI export control bill emphasize safety dividend from open-source models

By Mariam Baksh / May 14, 2024

Stakeholders weighing in on a new bill with far-reaching export control provisions and significant political backing warn that it could harm open-source artificial intelligence development, which they say would make it harder to improve the safety of AI systems.

Introduced May 8 by House Foreign Affairs Chairman Michael McCaul (R-TX), the bipartisan H.R. 8315 -- the Enhancing National Frameworks for Overseas Restriction of Critical Exports, or ENFORCE Act -- is already scheduled for markup May 16.

It was crafted with support from the White House and is co-sponsored by Rep. Raja Krishnamoorthi (D-IL), who was also behind the recently enacted legislation to ban a Beijing-controlled TikTok from operating in the United States.

In December, the head of the Commerce Department’s Bureau of Industry and Security -- the agency that would administer the bill’s new licensing powers over the export of, and activities related to developing, covered AI systems -- said the agency was examining ways to regulate open-source large language models, with the goal of blocking China’s access to the “dual-use” technology.

"The threat of open-source AI models is theoretical, and restricting their development would hamstring competition in the US's tech economy,” Todd O'Boyle, senior director of tech policy for the Chamber of Progress, told Inside AI Policy reacting to the new McCaul bill. “The scaremongering over open-source AI is reminiscent of the same debate over open-source software, which has since proved itself safe enough to power every aspect of the internet we use.”

Indeed, some defenders of open-source AI models -- including a representative of Stability AI, which builds the open-source Stable Diffusion models, and nonprofit open-source AI researchers at EleutherAI -- argue open-source models are safer than closed ones, in part because they allow tinkerers to look under the hood and ‘red-team’ the systems.

“To put better guardrails on AI, Congress should focus on addressing consumer harms rather than theorizing about what type of models work best," O’Boyle said.

Nick Garcia, policy counsel for the consumer rights group Public Knowledge, pointed to a public comment process at another Commerce agency -- the National Telecommunications and Information Administration -- where he said there was “a huge amount of engagement that pointed to the massive benefits of maintaining an open AI development and research ecosystem.”

He said it would make sense for lawmakers to “see what expert insights the NTIA is able to distill from its proceedings, before jumping to interventions that might harm American innovation and competition,” adding, “requiring government licensing for a broad range of activities related to AI development and paving the way towards potentially draconian restrictions on open-source AI model development and research is a concerning prospect.”

“There is a careful balance to strike here,” he said. “We want to protect American competitiveness and national security interests, but the openness of the AI research and development ecosystem makes America more competitive, innovative, and results in AI systems that are safer and more accountable.”

A report on the NTIA comment process is due July 26 under President Biden’s Oct. 30 executive order on artificial intelligence.