Inside AI Policy

Software group engages with states on AI legislation while urging federal action

By Charlie Mitchell / September 28, 2023

Legislative activity on artificial intelligence is surging at the state level, according to an analysis by BSA-The Software Alliance, which says it’s engaging extensively with state lawmakers even as the group urges Congress to create a uniform federal approach for addressing high-risk AI use cases.

“The outbreak of state legislation affecting artificial intelligence reflects a broader effort by policymakers worldwide to set initial rules for AI systems,” BSA said in a release on Wednesday.

“In the United States, state legislatures have led in passing digital privacy laws in the absence of federal legislation enacted by the US Congress. It seems likely that states may be poised to write the next comprehensive rules for AI in the United States,” BSA said.

The group finds parallels with action in recent years on privacy, with the European Union and states across the U.S. leading on policy while Congress lags behind.

“We’ve seen this movie before,” BSA vice president Craig Albright said on a call with reporters Wednesday. “What we’re seeing is similar to the privacy wave a few years ago. … The [AI] wave is starting” at the state level.

He forecast that much more state AI legislation is coming next year and that BSA is geared up to engage with state officials. “We have learned from the privacy debate,” Albright said, noting how many national business groups focused unsuccessfully on passing a federal privacy law while state privacy laws were proliferating.

“We’re calling on [state] legislators to pass bills, we’ll work with them,” Albright said of BSA’s legislative efforts on AI, which also include a push for federal legislation.

Matthew Lenz, BSA’s senior director of state advocacy, emphasized in the release, “Passing a strong national law to address high-risk AI is the best way to ensure that the United States is able to reap the benefits from AI. But states are already moving on AI, and the enterprise software industry is deeply engaged with state leaders to advance constructive solutions that build trust and confidence in the responsible development and deployment of AI.”

The software group found a 440 percent increase in AI-related bills introduced at the state level in 2023, compared to the previous year, and said the 190 bills introduced this year exceeded the number of state AI bills offered in the previous two years combined.

“Bills focused on multiple aspects of AI, including regulating specific AI use cases, requiring AI governance frameworks, creating inventories of states’ uses of AI, establishing task forces and committees, and addressing the state governments’ AI use,” according to a state legislation summary from the software group.

“Connecticut, Florida, Illinois, Louisiana, Minnesota, Montana, Texas, Virginia, and Washington all passed AI legislation. California enacted legislation to conduct a survey of the state’s use of high-risk AI,” BSA said in the summary. “Most enacted bills were related to deepfakes, government’s AI use, including law enforcement, and task forces/committees.”

It said, “Despite the 440% increase in bill introductions, only 29 bills (15%) passed at least one legislative chamber, and only 14 of those became law. BSA anticipates that the volume of AI bills will increase and the likelihood of bill passages will also increase.”

BSA’s Lenz noted on the press call that few of the state bills deal with generative AI and that such measures are expected in the new year. Generative AI “burst into the scene late in the legislative session” for many states, he said, but has already created “a dramatic increase in interest.”

Lenz said BSA “would love to see a federal standard” addressing areas like automated decision-making, which could help “build trust” in AI technology.

Albright pointed to BSA’s priorities for responsible AI innovation and said the group supports requirements for companies to have an AI risk management program, conduct impact assessments and implement best practices.

The focus should be on “high-risk areas,” Albright said, citing access to benefits, credit and housing as examples. He said the obligations on companies should “fit their role in the AI system,” such as whether they are developers or users of AI products, and must be “workable.”

Albright said he’s encouraged to see a consensus emerging at both the state and federal level around risk-based approaches that include impact assessments.