A white paper from the international law firm Steptoe provides a guide to the existing laws, rules and pending regulations in the national security space that apply to a range of artificial intelligence issues, going beyond weapons systems to include export controls, critical infrastructure and algorithms.
The law firm’s analysis pushes back on the general perception that federal laws and rules urgently need to be updated to address issues related to the growing pervasiveness of AI technologies, arguing that many existing “legal regimes” apply to AI.
“Much of today’s discussion on AI centers around the lack of laws and regulations and the need for policymakers to catch up to rapidly evolving industry developments,” according to Steptoe’s “Artificial Intelligence and the Landscape of U.S. National Security Law,” released July 30.
“Despite this narrative,” the paper says, “AI is already subject to a significant number of national security-related laws and several new legal regimes will be implemented in short order. These national security-related regimes can apply to obvious cases such as the use of AI in weapons systems, but can also apply to AI with no clear, direct connection to national security.”
For example, it says, “AI systems used in critical infrastructure, AI algorithms that power social media feeds, and generative AI that can create so-called ‘deepfakes’ are just a few examples of AI systems that may implicate a number of US national security laws.”
The 40-page report goes through President Biden’s AI executive order of Oct. 30, 2023; standards work at the National Institute of Standards and Technology and the Department of Homeland Security; customer identification rules; reporting requirements on large language models; export controls; information and communications technology supply chain rules; personal data controls; government contract requirements; and more.
“While US policymakers are concerned about strategic competition with a number of foreign rivals and adversaries, there is no doubt that China is the country of greatest concern to US officials with respect to AI and national security,” the paper says.
“Of the various legal regimes and provisions discussed in this white paper, some are laws of general applicability applying regardless of jurisdiction, some target a handful of jurisdictions viewed by US officials as particularly problematic, and some target a single country such as certain export controls measures against China or Russia,” it says.
“Certain laws discussed herein apply broadly to transactions or other dealings that implicate US national security, generally, while others apply specifically to AI,” it explains. “AI systems rely on two fundamental building blocks: (1) advanced semiconductors needed to provide sufficient computing power to train, and in some cases operate, AI models and (2) significant quantities of data used to train AI models.”
“Both of those building blocks,” the paper says, “are also subject to a range of US national security laws and, while this paper focuses on AI software, it will also touch on those elements.”
Reporting, export controls
The paper notes that Biden’s executive order includes specific reporting requirements for companies “developing or demonstrating an intent to develop potential dual-use foundation models,” as well as reporting by persons that “acquire, develop, or possess a potential large-scale computing cluster,” including “the existence and location of these clusters and the amount of total computing power available in each cluster.”
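For context on what that “total computing power” figure turns on: the executive order’s interim threshold for reportable clusters is a theoretical maximum computing capacity of 10^20 operations per second (alongside co-location and networking conditions the sketch below omits). The following minimal Python sketch illustrates the underlying arithmetic only; the inventory and per-chip peak throughput figures are illustrative assumptions, not vendor specifications.

```python
# Minimal sketch: estimating a cluster's theoretical maximum compute,
# the quantity the executive order's cluster reporting requirement turns on.
# The 1e20 FLOP/s interim threshold comes from the order; the order also
# imposes co-location and >100 Gbit/s networking conditions omitted here.
# Chip counts and per-chip peak figures are illustrative assumptions.

# Hypothetical inventory: chip model -> (count, assumed peak FLOP/s per chip)
CLUSTER_INVENTORY = {
    "accelerator_a": (20_000, 2.0e15),  # assumed ~2 petaFLOP/s each
    "accelerator_b": (5_000, 1.0e15),   # assumed ~1 petaFLOP/s each
}

REPORTING_THRESHOLD_FLOPS = 1e20  # interim threshold in the AI EO

def theoretical_max_flops(inventory: dict[str, tuple[int, float]]) -> float:
    """Sum peak throughput across every chip in the cluster."""
    return sum(count * peak for count, peak in inventory.values())

total = theoretical_max_flops(CLUSTER_INVENTORY)
print(f"Theoretical max: {total:.2e} FLOP/s")
print("Reportable:", total >= REPORTING_THRESHOLD_FLOPS)
```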
Further, it says, “With respect to infrastructure as a service (IaaS), the AI EO directs the Department of Commerce to require IaaS Providers to report to Commerce ‘when a foreign person transacts with that United States IaaS Provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity.’ Such reporting obligations must also be flowed down to ‘foreign resellers’ of the IaaS Product.”
And, “The order further directs Commerce to issue rules requiring IaaS Providers to ‘ensure that foreign resellers of United States IaaS Products verify the identity of any foreign person that obtains an IaaS account (account) from the foreign reseller.’ Commerce has taken additional steps to implement this portion of the AI EO in a new notice of proposed rulemaking (NPRM).”
The latest regulatory agenda for the Commerce Department sets December as the target for finalizing that proposal. Major companies and industry groups weighed in with calls for changes during a comment period that closed April 29.
On export controls, the Steptoe paper says rules from the Export Administration Regulations “can apply to AI in a number of complex and sometimes unexpected ways. These include potential controls on an AI model or system itself, as well as the potential for an AI model to generate export-controlled content or to have export-controlled content in its training data.”
“With respect to AI,” it says, “the EAR can apply to AI software systems and physical infrastructure (e.g., advanced semiconductors and semiconductor manufacturing equipment) used to train or operate AI systems. It can also apply to material included in training data and to content that is generated by AI systems (e.g., a large multimodal model that provides a technical description or produces a detailed image or blueprint).”
The paper says, “While there is no ECCN [export control classification number] that broadly controls general purpose AI software, there are dozens of ECCNs that could potentially control a given piece of application-specific AI software. Some of these ECCNs specifically describe AI software while most are broader ECCNs, often called ‘catchall’ ECCNs, that apply broadly to certain types of software.”
And, it says, “Regardless of whether the AI software itself is controlled, it is important to consider whether any of the model’s training data or outputs might be controlled.”
First, the paper says, “having controlled training data makes it more likely that a system will produce controlled outputs.”
“Second,” it says, “having controlled training data could lead to a violation if the data is exported, reexported, or transferred or a ‘deemed export’ or ‘deemed reexport’ occurs.”
And “Third, having controlled training data makes it more likely an AI model will generate content that is ‘subject to the EAR.’”
ICTS supply chain
The paper includes a detailed discussion on the scope and implementation of the Commerce Department’s 2023 final rule on transactions involving information and communications technology and services (ICTS).
It says, “The listed technology categories include, among other things, so-called ‘emerging technology.’ ‘Emerging technology’ in this context is defined as ‘ICTS integral to artificial intelligence and machine learning, quantum key distribution, quantum computing, drones, autonomous systems, or advanced robotics.’ Thus, the ICTS Rule explicitly regulates ICTS transactions involving AI technology.”
Steptoe says, “Although the manner in which the Commerce Department may ultimately utilize the ICTS Rule is still uncertain, its applicability to AI is unquestioned and its potential impact could be substantial, as it provides Commerce with broad and highly discretionary authority to prohibit or impose conditions on transactions involving AI with sufficient ties to a ‘foreign adversary.’”
The report notes, “Commerce recently appointed the first Executive Director of the Office of Information and Communications Technology and Services (OICTS), which is charged with implementing the rule. It seems likely that as Commerce continues to develop its team, expertise, and regulatory and enforcement infrastructure, it will move to employ the ICTS Rule more frequently and aggressively, including as a means to regulate the use and availability of certain AI products and services.”
Within the ICTS context, Steptoe details unprecedented proposed regulations for connected and autonomous vehicles that would clearly sweep in AI.
“On February 29, 2024, Commerce announced a first of its kind action by initiating a rulemaking to prohibit or impose conditions on certain transactions involving foreign technology used in so-called ‘connected vehicles’ or ‘CVs,’” the paper says.
“The ANPRM [advance notice of proposed rulemaking] explains that BIS [the Bureau of Industry and Security] is concerned with a wide range of national security risks, including those posed by fully autonomous vehicles and vehicles with self-driving features or modes, many of which are powered by AI,” Steptoe says. “Therefore, AI is clearly one of several key motivating risks behind the rulemaking process. While the ANPRM is the first time BIS has sought to implement restrictions on a class of transactions under the ICTS rules, it is unlikely to be the last.”
It says, “As OICTS continues to build out its capabilities and pursue its core policy objectives, it seems likely additional classes of transactions will be targeted in the future. AI and AI-powered products would seem to be among the most likely targets of such measures.”
The paper also looks into regulation on bulk transfers of personal data to “countries of concern” under a 2024 law and executive order.
A notice of proposed rulemaking is expected this month, according to the regulatory agenda for the Department of Justice.
“Regardless of the scope of the final restrictions, the AI industry is likely to be significantly impacted given the importance of using vast quantities of data to train AI models and the ability of AI models to review data to identify trends and make connections between seemingly unlinked data points,” the Steptoe report says. “The restrictions are likely to present a number of compliance challenges for AI companies, many of which operate on a global basis and pool talent from leading AI researchers located around the world.”
