Inside AI Policy


Consumer advocates reject Utah legislation as model for AI policy

By Mariam Baksh / May 6, 2025

Legislation set to take effect in Utah this week has been touted by industry-aligned groups as a model for Congress to govern artificial intelligence, but consumer advocates say the updated policy is even more permissive than the original and that adopting its provisions would likely do more harm than good.

“[SB 226] should not in any way be a model for federal or other state AI legislation,” Ben Winters, director of AI and data privacy for the Consumer Federation of America, told Inside AI Policy.

Winters was referring to legislation that, starting May 7, will narrow the scope of disclosure requirements currently in force for companies providing artificial intelligence services to Utah residents.

“Regulation mitigation” provisions, described by the bill’s proponents as a way to balance innovation and consumer protection, have drawn support from centrist think tanks like the Aspen Policy Institute, whose researchers recommend greater transparency and consistency to improve public trust in the law’s implementation.

The consumer advocates see a dangerous potential for companies to trade non-meaningful transparency for immunity from liability. In Utah, companies can apply for and negotiate “regulatory relief” through the state’s Office of Artificial Intelligence Policy, which the original bill established. The bill follows the model of template legislation promoted in state legislatures by the conservative American Legislative Exchange Council.

Under the initial Utah bill -- SB 149, which libertarian groups like the Koch-backed Abundance Policy Institute and the market-based R Street Institute have testified should be replicated by federal lawmakers -- covered entities must disclose to users that they are interacting with artificial intelligence and not a real person.

SB 149, which was enacted in May 2024, further specified that the disclosure should appear as “a conspicuous statement written in dark bold with at least 12-point type on the first page of the purchase documentation.”

The new law will rein in the proactive disclosure requirements, making them instead only applicable to “high-risk” AI interactions such as those in financial, legal, medical, mental health or other services to be determined by a subsequent rulemaking. It also gives those entities more leeway to work with the state’s AI policy office to determine what form the disclosures can take.

And for areas not deemed high-risk, the new law requires AI service providers to disclose the nature of their products only if users explicitly ask.

But that only scratches the surface of consumer advocates’ overall issue with the Utah legislation.

“The Utah bill essentially created a regulatory sandbox for companies. This is certainly not what EPIC would call sensible regulation,” Kara Williams, a law fellow focusing on state privacy and AI policy at the Electronic Privacy Information Center, told Inside AI Policy.

“Nor would the type of stronger AI legislation we do advocate for stifle innovation, although I know that's a common industry talking point,” Williams said.

CFA’s Winters said the kind of disclosures called for in the Utah legislation hardly amount to the “meaningful” transparency that would allow users to hold companies accountable when AI is involved in life-altering decisions.

It’s possible, instead, he said, to “require across the board, proactive, transparency about what is going on in that system, so that people, if rejected, would be able to actually investigate and figure out what sort of data is even being used in a system that determines whether my health insurance claim should be processed.”

But a collection of bills calling -- to varying degrees -- for transparency approaching that level has also raised concerns among consumer advocates, because those bills would correspondingly curtail the private right of action. Such bills, like one recently vetoed in Virginia, generally leave individuals subject to the whims and resources of the attorney general’s office, the consumer advocates say, and contain other loopholes they object to.

While Winters pointed to other transparency bills that have been proposed, such as New Mexico’s HB 60, as more viable options, he said CFA is also working proactively to craft legislation that “would actually help consumers.”

Meanwhile, he said, “when laws like [the Utah bill] come up we will try to educate the legislatures considering them on how -- worse than useless -- actively harmful,” they are likely to be.

The Aspen Institute researchers note: “A cornerstone of the [Utah AI Policy] Office’s leading work is its AI Learning Lab -- a regulatory sandbox for Utah-based AI companies and industry stakeholders to study AI solutions.”

“However, the Learning Lab’s work remains opaque to members of the public, affecting trust, limiting engagement, and risking the exclusion of diverse stakeholders,” they said.

They recommend: “Sharing (1) the OAIP’s evaluation framework for Learning Lab partner AI initiatives and (2) a running list of those partners would ensure that the Learning Lab’s work is aligned with the OAIP’s values of increasing trust in AI activities and balancing innovation and compliance.”