Inside AI Policy

April 29, 2025


By Mariam Baksh

Amid skepticism from allies, the National Fair Housing Alliance is looking to build on what it sees as a surprising bright side in the Office of Management and Budget’s recently issued guidance on federal agencies’ use of artificial intelligence.

By Charlie Mitchell

A major defense industry group says the U.S. government should make extensive use of artificial intelligence tools to improve its own operations, create a repository of high-quality data for developers, and find and eliminate barriers to AI adoption, among its recommendations for the upcoming White House AI action plan.

By Rick Weber

The Federal Trade Commission unanimously approved an enforcement order against AI detection firm Workado for failing to demonstrate the accuracy of its claims about products and services, a move signaling a willingness by the commission’s new Republican majority to hold AI companies accountable.

By Rick Weber

Plaintiffs are attempting to exploit a recent court-ordered consolidation of cases to expand the scope of copyright infringement allegations to include a new class of large-language AI models, according to an OpenAI filing with a federal district court in opposition to an amended complaint in the landmark lawsuit.

By Rick Weber

The National Science Foundation on behalf of the White House Office of Science and Technology Policy is seeking industry and public input on revising the Biden administration’s AI research strategy, with the goal of filling gaps in private sector research and implementing President Trump’s executive order for eliminating barriers to U.S. “dominance” in developing artificial intelligence technologies.

By Rick Weber

A group of House Democrats is still waiting for the Office of Management and Budget to report back on the handling of sensitive data by the so-called Department of Government Efficiency, framing a potential showdown with the White House over the Trump administration’s agenda for widespread deployment of artificial intelligence.

President Trump’s new executive order on artificial intelligence use at the Department of Education appears to align with the administration’s broader strategy for rapid deployment of the technology, as the department faces massive staffing cuts and a possible shutdown under the government downsizing work led by the so-called Department of Government Efficiency.

The Center for Data Innovation is urging the U.S. Consumer Product Safety Commission to go beyond modest artificial intelligence use cases and embrace AI as an essential component of a modern enforcement agency facing serious challenges related to China.

An analysis by the International Association of Privacy Professionals flags challenges for implementing the Justice Department’s bulk data protection rules, which went into effect earlier this month, even as DOJ offered some relief in guidance that extended the compliance deadline to July 8.

The bipartisan leaders of the House select committee on China have set a deadline of next week for NVIDIA CEO Jensen Huang to respond to allegations that China’s DeepSeek artificial intelligence model relies on NVIDIA computer chips despite U.S. export controls blocking such sales, responses that are likely to inform potential congressional and Trump administration actions to counter Beijing’s AI ambitions.

Republican leaders of a House Energy and Commerce subcommittee are demanding details on the training data for China-based DeepSeek’s generative AI model, raising privacy and proprietary concerns about Americans’ data and adding to a growing chorus of congressional inquiries into the national security implications of the open-source chatbot released earlier this year.

Montana Gov. Greg Gianforte (R) has signed into law legislation that enshrines a right to compute in the state’s constitution, drawing plaudits from free-market groups that succeeded in pulling a controversial shut-down mechanism from the bill’s requirements for AI developers.

Policy analysts from across the political spectrum briefed congressional staffers on the prospects for online safety legislation, citing the complicating role that artificial intelligence plays in moderating content amid discussions about government censorship and possibly modifying a longstanding liability waiver for platform companies.

Stakeholders issuing muted calls for federal pre-emption of state artificial intelligence laws acknowledge the long odds of getting such a proposal through Congress, but some are hopeful that the upcoming “AI action plan” from the Trump White House will help address their underlying concerns about burdensome state requirements while kick-starting voluntary standards development work.

A posting by the Center for Data Innovation offers three recommendations for pushing AI adoption across the federal bureaucracy, saying recent Office of Management and Budget memos on artificial intelligence are an important step but more guidance is needed to spur action.

The American Enterprise Institute says federal officials should pursue a broad strategy of countering artificial intelligence “overregulation,” including reviews of existing AI-related rules and initiatives and implementation of AI playbooks at agencies that ensure a focus on innovation, in comments to the White House on an emerging AI “action plan” for the Trump administration.

The Center for Democracy and Technology is calling out as irresponsible a suggestion that the evaluation of AI models should be conducted in classified environments, amid an evolving policy landscape for facilitating third-party red-team testing of the technology.

President Trump has issued a memorandum directing federal agency heads to work with the White House Council on Environmental Quality on the use of advanced technologies to digitize and streamline the environmental permitting process, building on initiatives by the Biden administration as well as recent Trump actions promoting the use of artificial intelligence.

Insurance industry specialists generally have yet to see demand for AI-specific policies, with a few exceptions, including an insurance product that guarantees the ongoing technical performance of models, given the “black box” nature of the technology.

The Consumer Technology Association is floating a proposal to lawmakers that already appears unpopular with both copyright holders and AI developers who want the ability to keep training their models on content legally protected as intellectual property.

The liberal policy group Center for American Progress says state consumer protections around artificial intelligence must be preserved in the federal privacy framework that House Energy and Commerce Republicans are beginning to develop, even as pre-emption of states on both privacy and AI policy is the priority for industry groups weighing in with the committee.

Federal pre-emption, a central issue in the seemingly endless debate over a national privacy law, has extended into the AI policy realm with calls for a legislative solution focused explicitly on artificial intelligence, but Congress lacks a clear path -- and perhaps the appetite -- for undertaking such an effort.

The Information Technology and Innovation Foundation has laid out a pro-growth strategy for Canada’s tech industry which includes leveraging existing public safety regulations to promote the use of artificial intelligence.

The influential Special Competitive Studies Project has released a report that offers a three-phase strategy for integrating artificial intelligence into U.S. intelligence gathering operations, arguing for large-scale investments and warning that failure to act threatens national security.

The Center for Strategic and International Studies has run China’s DeepSeek AI model through a benchmark it created to assess large language models for bias in foreign policy decision-making, and found the results fit a concerning pattern of certain models recommending escalation in response to international disputes.

Security analysts at the influential Special Competitive Studies Project say the upcoming expiration of a landmark agreement with Israel on military cooperation and financing offers an opportunity for the Trump administration to address emerging threats and opportunities related to artificial intelligence, including building on the Abraham Accords signed in Trump’s first administration.

Meta Platforms is telling a federal district court that the public benefits of its large-language generative artificial intelligence model outweigh the claims by plaintiffs that training of the AI model violated copyright protections, in a case that Meta says will determine the future of genAI technologies and its open-source Llama model.

Class-action plaintiffs in a landmark copyright infringement lawsuit against Meta are telling a federal district court that approving how the tech giant trained its large-language artificial intelligence model would encourage further “piracy” of published works, setting the stage for a ruling on cross motions for summary judgment.

A group of copyright law professors is urging a federal district court to reject claims by tech giant Meta that its training of artificial intelligence models constitutes “fair use” because it offers new, or transformed, content based on published works by the authors who are suing the company.

OpenAI is asking a federal district court for a jury trial on allegations by Elon Musk that the company says are part of a campaign by Musk to squelch investments in artificial intelligence that are beyond the control of his private ventures, to the detriment of “public interest.”

The Mozilla Foundation finds “substantial” engagement by companies with open-source “models and tools,” in a new survey-based report on how stakeholders are using non-proprietary software in the technology stack and advocating for policies that support its use.

A detailed report from the R Street Institute spells out policy approaches for secure development and deployment of open-source artificial intelligence systems, which the free-market think tank says is “indispensable” to U.S. leadership in global tech competition.

The Center for Democracy and Technology outlined technical reasons why the Trump administration’s use of artificial intelligence to monitor social media in targeting noncitizens for removal from the country will be ineffective, in addition to furthering potential violations of the First Amendment.

Research featured in an annual index issued by Stanford University’s Institute for Human-centered AI suggests increasing the amount of data a model is trained on won’t necessarily address implicit biases in its outputs, stressing a need for transparency in the process.