Participants in a Brookings Institution effort to consider how government can oversee equitable implementation of artificial intelligence are encouraging incoming Trump administration policymakers to think beyond harm reduction to democratizing control over how the technology is developed and deployed.
“Today is both an exciting day and equally tough, because we're somewhat uncertain as to how the next administration will potentially value equity and how we create and execute AI models that embolden appropriate opportunities while protecting people from consequential harms,” said Nicole Turner-Lee, adding, “At the end of the day, there are people in their communities who are commoditized, in many respects, by AI models without agency or repair.”
Turner-Lee directs Brookings’ Center for Technology Innovation and led a Dec. 9 panel event announcing a series of upcoming reports from the center’s AI Equity Lab.
Over the last two years, the lab has convened AI stakeholders -- including officials from the National Institute of Standards and Technology and other government agencies like the Department of Education, along with industry and civil society groups from sectors like journalism and health care -- to explore practical approaches to these issues.
The event aimed to turn the dial down on pitched political debates typically marked by discussion of “woke” policy proposals and focus attention on the foundational elements that would allow for equitable AI.
Those elements might include approaches that regulate AI by use case -- similar to a Texas GOP bill that would outright ban facial recognition technology -- or that require transparency about underlying training data, according to participants in working groups at the AI Equity Lab who spoke at the event.
The incoming Trump administration has promised to repeal President Biden’s 2023 executive order on artificial intelligence, which generally called for agencies to scrutinize their use of AI with an eye toward equity. But there are members of his administration, most notably Elon Musk, who have expressed interest in regulating the technology, and political dynamics in Congress are also quite fluid. Turner-Lee struck a conciliatory tone.
“Now I want to make sure that I'm going down your lane when I say ‘equity,’” Turner-Lee said, so “that you recognize that I'm not sharing it as a politicized concept, nor should it be for anyone who actually speaks this word.”
She added, “I know we're in the backdrop of a conversation around banning books, removing [Diversity, Equity and Inclusion], suggesting that racial representation in military and corporations run counter to the American dream of meritocracy, but friends, as a sociologist, I know that we have a lot to do to catch up and ensure that the bits and bytes that dominate everyday conversations include just about everyone, and the people right now who are training AI systems have been commoditized.”
Referring to stakeholders like the New York Times, which is actively fighting AI developers in court over the use of its work in training AI models, Turner-Lee said: “If you don't own a copyright or trademark or patent, you are a subject of the technology.”
“What about the people who can’t afford to sue?” asked Courtney Radsch, a nonresident fellow at Brookings’ Center for Technology Innovation who also directs the Center for Journalism and Liberty at the Open Markets Institute.
“It seems like we're talking about journalism, but [what] we're actually talking about is the very foundation of artificial intelligence and all of these different sociotechnical and political economic issues that we're dealing with,” Radsch said.
Radsch argued that embracing agency should mean going beyond “harm reduction” when weighing how AI systems are designed, to deciding collaboratively whether certain use cases -- surveillance technology that could deter political gatherings or chill religious freedom, for example -- should be deployed in the first place.
At the same time, participants noted that improving AI often also depends on collecting sufficient and appropriate data from communities, so a certain amount of “surveillance” might be helpful in some areas, such as efforts to develop more inclusive medical diagnostic technology.
“This is the tradeoff that we see,” Radsch said, pointing to the need for policymakers to approach the issue more deliberately and emphasizing the importance of training data.
She cited the well-known example of a Google Gemini model -- a favorite punchline of conservative “anti-woke” AI advocates -- to drive home the point. In trying to counteract negative stereotypes, the model instead produced outputs -- images of Black Nazis and American founding fathers -- that “do not [comport] with reality,” Radsch said, revealing “this tension between trying to address the fact that the data inputs reflect the existing biases of the data.
“You can see that part of [the solution] is to have more transparency into data sources, both in terms of the training model and in terms of the inference data,” she said. “I think that that awareness, literacy about those issues, is the first step.”
The other panelists voiced their agreement.
