Inside AI Policy

Civil rights group offers measured praise for Trump AI memo

By Mariam Baksh / April 29, 2025

Amid skepticism from allies, the National Fair Housing Alliance is looking to build on what it sees as a surprising bright side in the Office of Management and Budget’s recently issued guidance on federal agencies’ use of artificial intelligence.

“We’re having internal discussions because we actually like some of what we’re seeing out of the Trump administration,” Michael Akinwumi, NFHA’s Chief AI Officer and Rita Allen Fellow, told Inside AI Policy during a break at the group’s Responsible AI symposium on April 28.

He added that NFHA is using AI to analyze the more than 10,000 public comments the White House recently published to shape a new AI action plan, looking for potential allies based on its takeaways from the OMB guidance.

Akinwumi leads NFHA’s AI standards development work with the National Institute of Standards and Technology’s AI Safety Institute, which at least one other civil society group has stepped away from in the wake of the Trump administration’s changes to the mission of a related consortium of public- and private-sector entities.

The NFHA symposium continues through April 30 with discussions focused on appropriate AI governance, including with representatives from government and the fintech industry.

“There’s a lot in there that we’re still trying to unpack,” Akinwumi said, referring specifically to OMB’s April 3 memo -- M-25-21 -- on “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust.” But reading through it, he said, NFHA was struck by the language retaining instructions for agencies to consider impacts on civil rights, civil liberties and privacy when conducting assessments of “high-impact” AI use cases.

The April 3 memo replaced one issued under the Biden administration, which stemmed from the now-repealed October 2023 executive order on AI.

“I was thinking, that’s really a good thing, are we sure this is coming from this administration? The administration that has attacked [diversity, equity and inclusion], civil rights … they published this?” Akinwumi said.

His comments on the memo reflect a belief that it might lead agencies to employ the disparate-impact theory established under civil rights law -- an approach that would run counter to the April 23 executive order the administration issued on “restoring opportunity and meritocracy.”

Akinwumi noted that, among other things, the OMB memo specifically defines “high-impact AI” as “AI with an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on … an individual or entity's access to education, housing, insurance, credit, employment, and other programs.”

“What it means is that, when you think about impact … the Fair Housing Act is a perfect setting for high impact AI applications, because it's looking at outcome … it's also looking at the performance. And we already have disparate impact theory,” he said. “So what that means is, if you're using AI for credit scoring, you have to use disparate impact.”

He said, “essentially any impact assessment is looking at the outcome, and then you also need to make sure that the outcome, in terms of performance, is equitable across all protected classes, right? ... That’s really one of the things that we like about it.”
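
That outcome-focused check has a simple quantitative core. As a minimal sketch -- with hypothetical group labels and approval counts, since neither the memo nor Akinwumi prescribes a formula -- an assessment might compare approval rates across protected-class groups and flag large gaps, here using the “four-fifths rule” threshold commonly cited in disparate-impact screening:

```python
# Hypothetical disparate-impact screen for AI credit-scoring outcomes.
# All group names and counts are invented for illustration.

approvals = {
    # group: (applicants, approved)
    "group_a": (1000, 620),
    "group_b": (1000, 410),
}

def adverse_impact_ratio(data: dict) -> float:
    """Ratio of the lowest group approval rate to the highest.

    Under the commonly cited "four-fifths rule," a ratio below 0.8
    is often treated as a signal of potential disparate impact.
    """
    rates = [approved / applicants for applicants, approved in data.values()]
    return min(rates) / max(rates)

ratio = adverse_impact_ratio(approvals)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.66 for the numbers above
if ratio < 0.8:
    print("Flag for review: outcomes may disparately impact a protected class.")
```

A screen like this is only a starting point; whether a flagged disparity can be justified as serving a legitimate business purpose is the legal question decided downstream.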

But therein lies a crucial assumption. Disparate impact theory -- established as doctrine through Congress and the courts for use by private litigants, according to the law firm Holland & Knight -- means that “entities may be held liable for a particular employment practice or policy that, although facially neutral, results in a significant adverse effect on a protected group (such as a race) and cannot be justified as serving a legitimate business purpose for the employer or entity.”

In the context of AI, “anti-woke” evangelists like Sen. Ted Cruz (R-TX) have argued that entities and their systems should be liable only for disparate treatment of protected classes, even if the outputs of AI appear skewed or biased against those groups.

The debate has prompted broad appeals across public interest groups for transparency into input data and algorithms, measures that would make disparate treatment easier to examine. But without addressing that question, the April 23 executive order on restoring meritocracy bars federal agencies from applying disparate-impact liability.

Adam Rust, director of financial services at the Consumer Federation of America, told Inside AI Policy, “CFA is definitely concerned about the disparate impact language in that executive order.”

Rust’s CFA and NFHA, along with banks and other financial institutions, have appealed for guidance from agencies like the Consumer Financial Protection Bureau on the use of less discriminatory alternatives, or LDAs, which they say AI could help facilitate for fairer, more inclusive lending. The technology could be used to incorporate and balance more diverse sources of data, they say, rather than relying on proxies for race that are more likely to figure into a traditional credit score and could lead to biased outputs and loan-approval recommendations, for example.
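
Neither CFA nor NFHA has published a specific method, but the LDA logic they describe can be sketched in a few lines: among candidate underwriting models evaluated on both accuracy and the kind of impact ratio above (all numbers below are invented for illustration), prefer the least discriminatory model whose performance stays within a tolerance of the best:

```python
# Hypothetical less-discriminatory-alternative (LDA) search: among candidate
# underwriting models, keep those with near-best accuracy, then pick the one
# with the least disparate impact. All evaluation numbers are invented.

candidates = [
    # (name, accuracy, adverse_impact_ratio)
    ("traditional_score",  0.81, 0.62),
    ("cashflow_augmented", 0.80, 0.84),
    ("rent_history_model", 0.78, 0.91),
]

TOLERANCE = 0.02  # acceptable accuracy loss relative to the best candidate

best_accuracy = max(accuracy for _, accuracy, _ in candidates)
viable = [c for c in candidates if c[1] >= best_accuracy - TOLERANCE]

# Among near-best performers, prefer the highest (fairest) impact ratio.
name, accuracy, impact = max(viable, key=lambda c: c[2])
print(f"Selected LDA: {name} (accuracy={accuracy:.2f}, impact ratio={impact:.2f})")
```

The tolerance parameter is where the policy tension lives: it encodes how much predictive performance, if any, a lender must give up before an alternative stops counting as “less discriminatory” and starts counting as a business burden.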

And while the OMB memos technically have limited direct bearing on individuals’ access to housing or other benefits, they represent a broader governance model that public interest advocates see driving similar structures or policies in the private sector, which stakeholders say is prone to inertia and risk avoidance even when LDAs would be to its ultimate benefit.

“Since it’s the federal government, it is likely to then be picked up as a standard at state and local governments, maybe corporations … it is this thing that has not been really put forward by a lot of governments or big entities in concrete ways, about how you should do it,” Ben Winters, CFA’s director of AI and data privacy, who is more familiar with the OMB memo, told Inside AI Policy. “That’s the bigger ripple effect than just the way the federal government uses AI.”

But although Winters, like the Center for Democracy and Technology, found the memo “not as bad as we thought it was going to be,” he was less hopeful than Akinwumi about its implications for civil rights, noting that differences between the Trump- and Biden-issued OMB memos may come down to semantics amid broader shared goals.

Both documents call for assessments, although Biden’s OMB instructed agencies to conduct risk assessments looking for impacts on rights and safety, while Trump’s OMB calls for impact assessments aimed at managing risk.

“While I think that the requirement to do assessments on performance is important, I don't share the same optimism that it will lead to a sort of retention or renewed focus on civil rights, or help really increase the amount to which all AI systems are assessed and mitigated for civil rights impact,” Winters said.

He added, “Any sort of assessment structure that is like this will sometimes lead to ethics or audit washing … people will say ‘we're doing all this disclosure, we're studying all these things’ … [but] it sort of takes away the role of investigation and enforcement in an ongoing way.”

Enforcement, Winters said, certainly doesn’t seem like it will be a priority under the Trump administration.

“What they have prioritized, and this is both administrations really, is adopting AI, period,” he said, “not necessarily doing it in any particular way.”