Inside AI Policy


R Street: Cruz amendment a sign of things to come after Biden EO politicized AI policy

By Mariam Baksh / November 19, 2024

Legislation proposed by Sen. Ted Cruz (R-TX) -- criticized by some as a controversial attack on foundational civil rights law -- was a proportional reaction to President Biden’s unprecedented executive order on artificial intelligence, according to a prominent free-market advocate in the tech space who welcomes the EO’s impending reversal.

“The Cruz amendment pushed back on the over-zealous approach to regulation favored by the Biden administration, which they pushed unilaterally without waiting for Congress to act,” Adam Thierer, senior fellow for technology and innovation at the R Street Institute, told Inside AI Policy. “As soon as President Biden issued the longest executive order in American history and took this more aggressive regulatory approach, he politicized AI policy to the core.”

The amendment, which is regarded as “anti-woke,” is at the center of a pitched battle over AI policy that reduces to a single question: should policymakers incentivize developers and deployers to proactively address biases in AI systems, given the scale at which those systems are being integrated into society, or continue to rely on litigation to address harms after the fact?

Biden’s EO 14110 generally required agencies to examine their prospective uses of AI for impacts on individual rights and safety and to establish mitigation plans or forgo procuring the technology.

The EO also included a section on “advancing equity and civil rights” where agencies like the Consumer Financial Protection Bureau were “encouraged to consider using their authorities, as they deem appropriate, to require their respective regulated entities, where possible, to use appropriate methodologies including AI tools to ensure compliance with Federal law and: (i) evaluate their underwriting models for bias or disparities affecting protected groups; and (ii) evaluate automated collateral-valuation and appraisal processes in ways that minimize bias.”

Thierer said “discrimination and consumer harms are already flatly illegal under a host of federal and state statutes. If it can be shown that AI developers are engaged in activities that violate civil rights or other consumer protections, many regulations and court-based remedies exist to address those harms.”

“These harms must be proven under time-tested legal procedures,” he said, arguing for the superiority of a system in which “we treat innovators as innocent until proven guilty.”

The Cruz amendment, which passed the Senate Commerce Committee over the summer attached to legislation to codify the AI Safety Institute at the National Institute of Standards and Technology, would prevent agencies from issuing even non-binding guidance on anything suggesting “Artificial intelligence, algorithms, or other automated systems should be designed in an equitable way that prevents disparate impacts based on a protected class or other societal classification.”

Civil rights groups were aghast, saying the amendment didn’t just gut the Biden EO but -- by invoking the term “disparate impacts” -- “threatens the enforcement of civil rights law across the entire federal government, across numerous pre-existing legal regimes.”

The Cruz amendment reads like “he’s trying to kneecap [the Equal Credit Opportunity Act],” Adam Rust, director of financial services at the Consumer Federation of America, told Inside AI Policy.

Major civil rights laws like ECOA did not originally mention “disparate impact” as a standard under which plaintiffs could bring discrimination lawsuits. “Disparate impact theory” emerged as legal doctrine after a 1971 case -- Griggs v. Duke Power Co. -- set a precedent that even facially neutral decision criteria could expose an entity to liability for discrimination if the results disproportionately harmed a protected group. Then, in 1991, Congress amended the Civil Rights Act, acknowledging the disparate impact standard while clarifying who bore the burden of proof in such cases.

Authorized by Congress through the Dodd-Frank Act, CFPB had already issued Regulation B, which incorporates the disparate impact standard into its interpretation of ECOA.

“Disparate impact occurs when a creditor employs facially neutral policies or practices that have an adverse effect or impact on a member of a protected class unless it meets a legitimate business need that cannot reasonably be achieved by means that are less disparate in their impact,” the agency wrote in 2013.

Empowered by the Biden EO on AI, groups like the CFA, in coordination with industry stakeholders and academia, appealed to CFPB for further guidance on how they might implement less discriminatory algorithms, or LDAs, including by incorporating data that is more relevant, as opposed to relying on data that may be tainted by issues like historic redlining activities.

“It's necessary to use disparate impacts as a means to ensure safety. If you focus only on disparate treatment, you permit unfairness through unawareness,” Rust said. “Requiring lenders search for better alternatives, given data that's come out of testing for disparate outcomes, is the most sensible way to go about this process.”
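For readers unfamiliar with how testing for disparate outcomes works in practice, one common first-pass screen -- borrowed from the EEOC’s “four-fifths rule” in employment law, not a method the article itself describes -- compares approval rates across groups and flags ratios below 0.8. A minimal sketch, using entirely hypothetical numbers:

```python
# Illustrative disparate-impact screen using the "four-fifths rule."
# All figures are hypothetical; real fair-lending analyses use
# regulator- and statute-specific methodologies.

def approval_rate(approved: int, applicants: int) -> float:
    """Share of applicants who were approved."""
    return approved / applicants

def adverse_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """Ratio of the protected group's rate to the reference group's rate."""
    return rate_protected / rate_reference

# Hypothetical outcomes from a facially neutral underwriting model
rate_a = approval_rate(560, 800)   # reference group: 70% approved
rate_b = approval_rate(240, 480)   # protected group: 50% approved

ratio = adverse_impact_ratio(rate_b, rate_a)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.71 -- below the 0.8 screen
```

A ratio below 0.8 would not by itself establish liability; under the standard CFPB described, the lender could still show a legitimate business need with no less-disparate alternative. It simply marks where a search for less discriminatory alternatives would begin.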

CFPB refrained from issuing the written guidance, even though officials had made verbal statements on the importance of creditors doing their due diligence in searching for LDAs.

Then, in August, CFPB submitted a forceful comment to the Treasury Department referencing provisions enshrined in civil rights statutes. Laws like ECOA require entities to notify applicants of adverse decisions, providing specific reasons why they were declined for a loan or denied insurance, for example.

“If firms cannot manage using a new technology in a lawful way, then they should not use the technology,” CFPB wrote, specifically referencing the industry’s use of “black-box” models, where often even the deployers don’t know how outputs are made.

Rust agrees: “models need to be interpretable and explainable.” But the Cruz amendment would also prevent agencies from requiring any transparency measures to examine the input data feeding AI models if the intent is to modify them to prevent disparate impacts.

The amendment -- which is now a touchstone for advocates of accelerating AI who are supported by Elon Musk, the tech mogul advising President-elect Trump -- was the natural backlash that could be expected after the Biden EO broached the issue, Thierer said.

The EO “was a calculated gamble that resulted in enormous pushback and led to the only promise on tech policy made in the GOP platform: To abolish that executive order as soon as the next administration takes office,” he said.

Dismantling the disparate impact standard is also part of the controversial Project 2025 agenda, which Trump has distanced himself from but which was informed by senior members of his incoming cabinet.