Inside AI Policy

Consumer group says ‘human-in-the-loop,’ trade secret exemptions spoil state AI bills

By Mariam Baksh / January 28, 2025

A centrally coordinated working group of state legislators from around the country is advancing model legislation to govern artificial intelligence that could be rendered useless, or even harm consumers, because of loopholes regarding human involvement in decision making and exemptions from transparency requirements, according to a leading consumer group.

“The trade secret standard should not be followed too aggressively and end up taking over this bill,” Ben Winters, director of AI and data transparency for the Consumer Federation of America, said Jan. 27 during the first public meeting held by the working group to collect stakeholder feedback.

Winters added, “It’s really easy to just say, ‘yeah, it’s a trade secret, or yeah, it’s not the whole substantial factor in a decision, we have a human in the loop.’ And then this bill that you spend so much time working on gets a lot less protective for folks.”

Since October, the working group has been facilitated by the Future of Privacy Forum, a nonprofit convener on tech policy issues. It has produced legislation in states including Colorado, Texas and Virginia that has rattled industry-backed think tanks by generally requiring developers of the technology to disclose details about their training data and processes, and deployers to conduct impact assessments for how it will be applied.

The expressed rationale behind those requirements, supported by academic literature and by agencies like the National Institute of Standards and Technology, is that AI’s propensity for biased outcomes stems from the data it’s trained on.

But the template legislation lawmakers are building on includes language exempting the disclosure of trade secrets, a feature that groups like TechNet and the Software and Information Industry Association worked during this week’s event to keep in place, saying developers shouldn’t be forced to reveal their “intellectual property.”

The legislation also doesn’t apply in cases where, to varying degrees, a human is involved in making decisions in consequential areas like employment, loan approval or healthcare benefits. That “human-in-the-loop loophole” has also been identified in federal legislation proposed last Congress by Sens. Amy Klobuchar (D-MN) and John Thune (R-SD), now the majority leader.

Members of the working group are fine-tuning how to define terms like “substantial factor” [in AI decision making] in a new model “high-risk AI framework,” which is based on legislation introduced by state Sen. James Maroney, a Democrat from Connecticut who initiated the group’s efforts with input from the human resources giant Workday.

During the event, Maroney engaged on the definitions with Workday, which is being sued over alleged discrimination connected to its artificial intelligence software and which also addressed the state lawmakers.

“Thank you very much for your presentation,” Maroney said in response to remarks from Evangelos Razis, Workday’s senior manager of public policy. “So I was kind of workshopping the new definition of ‘substantial factor’ in this draft … around human-involved decision making, so obviously we still need to work on that definition with you.”

Referencing a bill that received a hearing in the commonwealth Jan. 27, he added: “Do you point to the [definition] in HB 2094 in Virginia as a good definition of ‘substantial factor’? … Are we looking at … changing ‘consequential’? I also was trying to workshop ‘consequential decisions’ … to narrow and provide more clarity.”

Razis confirmed: “We’re quite happy with the Virginia definition … the definition used in Virginia incorporates a concept called ‘principal basis.’ … We think that captures, again, that distinction between fully automated decisions and those with a human in the loop. Because, no doubt, there are situations where a human is sort of clicking through, for example, and may not be giving meaningful considerations. And we think that language in Virginia captures those sorts of scenarios.”

But the consumer federation’s Winters said “the human-in-the-loop versus totally automated decision [making] is not enough of a clear distinction, and the requirements should [apply] when [AI is] a factor at all.”

Winters also noted overarching problems with liability shields included in the model legislation, which provides, depending on the iteration, a “rebuttable presumption” or “affirmative defense” for would-be defendants if they comply with the bill’s requirements.

“Companies should do impact assessments so they can make improvements if they identify issues with their applications, not so they can avoid accountability outright,” he told Inside AI Policy.