A new artificial intelligence “transparency” law enacted in California contains positive elements, but their value is undermined by imposing such regulatory requirements at the state rather than the federal level, the tech-backed Center for Data Innovation says.
“California has passed a new AI safety law and supporters are touting a trifecta of benefits: protecting innovation while advancing safety, filling a regulatory gap left by congressional inaction, and positioning the United States as a global leader on AI safety,” CDI’s Hodan Omaar writes in an Oct. 3 posting titled “California’s AI Safety Law Gets More Wrong Than Right.”
“On substance,” she says, “the law has some merits, and had it been enacted at the federal level, it could have marked imperfect but genuine progress. But by adopting those provisions at the state level, California does more harm than good on the very same fronts it claims as strengths.”

Omaar, a senior policy manager at the center, writes, “The law undermines U.S. innovation by fragmenting the national market, makes bipartisan compromise on a national AI framework more difficult, and blurs America’s position on AI governance.”
California Gov. Gavin Newsom (D) signed SB 53 into law on Sept. 29, setting the stage for implementation of first-time transparency “guardrails” on the largest companies that develop frontier AI models.
The bill was largely opposed by major tech industry groups and supported by civil society groups, though it generated much less heat -- on both sides -- than other AI bills considered in the Golden State legislature this year.
The legislation did pick up some industry backing, suggesting aspects of the bill could attract attention in Congress if lawmakers are looking for modest new requirements on AI developers to pair with a federal moratorium on state AI regulation.
“The law focuses on frontier AI developers, defined as companies training AI systems using more than 10²⁶ floating-point operations (FLOPs) of compute. They must notify the state before releasing new or updated models, disclosing release dates, intended uses, and whether access will be open-source or via API,” CDI’s Omaar explains in her post.
“All developers must report catastrophic safety incidents within 15 days -- or within 24 hours if lives are at risk. Larger firms with more than $500 million in annual revenue face additional obligations. They are required to publish and update safety frameworks, conduct catastrophic risk assessments and submit summaries to the California Office of Emergency Services, and implement strong cybersecurity to protect unreleased model weights.”
Further, she writes, “They must also maintain anonymous whistleblower channels, update them monthly, and provide quarterly summaries to senior leadership with protections against retaliation. The Attorney General can impose fines of up to $1 million per violation.”
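The tiered structure Omaar describes -- a 10²⁶-FLOP training-compute threshold that triggers coverage, and a $500 million revenue threshold that adds a heavier set of duties -- can be sketched as a simple decision rule. The snippet below is a hypothetical illustration of that logic based solely on the provisions reported above; the function, class, and field names are invented for the example and are not language from SB 53 or CDI’s post.

```python
from dataclasses import dataclass

# Illustrative sketch only: thresholds are taken from the reported provisions of SB 53
# (a 1e26-FLOP training-compute threshold for "frontier developers" and a $500 million
# annual-revenue threshold for the larger-firm tier). Names and structure are hypothetical.

COMPUTE_THRESHOLD_FLOP = 1e26          # training compute that triggers coverage
REVENUE_THRESHOLD_USD = 500_000_000    # annual revenue that triggers the larger-firm tier

@dataclass
class Developer:
    training_compute_flop: float
    annual_revenue_usd: float

def obligations(dev: Developer) -> list[str]:
    """Return the reported obligation tiers that would apply to a developer."""
    duties: list[str] = []
    if dev.training_compute_flop > COMPUTE_THRESHOLD_FLOP:
        duties += [
            "pre-release notification to the state (release date, intended uses, access mode)",
            "report catastrophic safety incidents within 15 days (24 hours if lives at risk)",
        ]
        if dev.annual_revenue_usd > REVENUE_THRESHOLD_USD:
            duties += [
                "publish and update a safety framework",
                "conduct catastrophic risk assessments; submit summaries to Cal OES",
                "implement cybersecurity protections for unreleased model weights",
                "maintain anonymous whistleblower channels; quarterly summaries to leadership",
            ]
    return duties

# Example: a large frontier developer picks up both tiers of obligations.
print(obligations(Developer(training_compute_flop=2e26, annual_revenue_usd=1_000_000_000)))
```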
But Omaar says, “The law has serious shortcomings: its blunt revenue threshold penalizes firms based on their size rather than their risk profiles, and its compute cut-off misses smaller but still capable models.”
On the other hand, “there are also important elements worth commending,” she says, singling out incident reporting and whistleblower protection provisions, while noting a “flexible approach to transparency.”
“By requiring firms to publish their own safety frameworks and submit high-level risk summaries to state officials,” Omaar writes, “the law leans on public and market-facing pressures rather than solely on centralized government oversight. That helps it avoid the trap in Virginia’s proposed approach, where routing everything through a single authority turns accountability into a paperwork exercise.”
“Had these provisions appeared in a federal law on AI safety,” she says, “their flaws might not have outweighed their value. One could reasonably argue for adjusting the revenue threshold to be size-neutral and for replacing crude compute cut-offs with capability-based criteria that could evolve over time. In that context, a federal statute with such elements could have still offered a net positive step toward more effective oversight of high-risk AI systems.”
Omaar writes, “But this is not a federal law. It is a state statute, and that changes the calculus entirely. No matter how measured or innovation-friendly the regulatory approach may appear in isolation, its merits collapse when applied through a single state because the law guarantees inconsistency across the country.”
She concludes, “California’s ideas could strengthen global AI safety, but only if they are carried through a national framework. If Democrats want that to happen, they should resist the lure of short-term state wins that make a federal deal harder to reach. Republicans, for their part, should resist the reflex to dismiss the merits of these ideas, many of which echo their own calls to regulate realized rather than hypothetical harms.”
Omaar says, “The key to success -- for both innovation and safety -- is lifting good ideas from both sides of the aisle and anchoring them in a bipartisan federal framework.”
CDI is part of the nonprofit, industry-funded Information Technology and Innovation Foundation.