ARI calls for transparency standards that keep AI ‘start-ups’ competitive

By Mariam Baksh / January 20, 2025

A report from Americans for Responsible Innovation seeks to galvanize political will behind a federal transparency policy that accounts for companies whose sole business is developing artificial intelligence, as distinct from established titans with other revenue streams, which the group suggests are better able to share details of their products.

The report comes as a bipartisan AI task force in the House recommends Congress work to improve transparency and incentivize open-source development across the industry, and as ARI views Elon Musk’s relationship with President-elect Trump as a positive sign for the prospects of disclosure efforts.

"There's a real opportunity in 2025 to bring together a number of constituencies in support of AI transparency policies, including both Republicans and Democrats, and advocates and industry,” ARI spokesperson Chris MacKenzie told Inside AI Policy. “Increasing transparency and trust in AI systems would be fundamentally good for business and good for consumers. Our transparency report is a resource to help inform that policymaking."

The report, released Jan. 10, ranks seven AI frontier models -- GPT-4, GPT-4o, o1 Preview, Llama 3.2, Gemini 1.5, Claude 3/3.5 Sonnet and Grok-2 -- on 21 metrics, including training data composition, changes from previous distinct models, security and environmental impact. It argues for evaluation standards while seeking to reframe how policymakers currently perceive transparency and openness.

ARI is a lobby group that receives significant funding from Open Philanthropy, a foundation led by Facebook co-founder Dustin Moskovitz, an early investor in Anthropic, which develops Claude.

As with those from OpenAI -- the GPT and o1 Preview models -- Claude uses an Application Programming Interface to serve end users while hiding the weights, or parameters, that shape its outputs. Gemini from Google and Grok from Musk’s xAI are also “closed-weight” models, whereas Meta has made its Llama model “open-weight,” allowing it to be downloaded and fine-tuned locally rather than accessed through a cloud-hosted API.
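The distinction is concrete for developers. Below is a minimal Python sketch of the two access modes, assuming the openai and transformers packages plus the requisite API key and model license; the model IDs and prompt are illustrative, not drawn from the report:

```python
# Closed-weight access: the model runs on the vendor's servers,
# and only an API endpoint is exposed to the developer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o",  # the weights never leave OpenAI's infrastructure
    messages=[{"role": "user", "content": "Describe your training data."}],
)
print(resp.choices[0].message.content)

# Open-weight access: the parameters themselves are downloaded
# and can be run -- or fine-tuned -- on local hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B"  # gated download; license acceptance required
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # weights land on local disk

inputs = tok("Describe your training data.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tok.decode(out[0], skip_special_tokens=True))
```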

“We really want Trump to listen to Musk’s words,” MacKenzie told Inside AI Policy. “When he talks about AI he talks a lot about [the importance of] transparency.”

Even so, Grok-2 received the lowest overall score in the ARI report, with 0 out of 4 on most of the metrics.

One way ARI explains this in its report is by classifying xAI, along with OpenAI and Anthropic, as “startups,” in contrast to Meta -- whose Llama got the report’s highest transparency rating -- and Google, whose Gemini came in second, albeit by a wide margin.

This study “exposes a stark divide in industry practices: established technology companies with diverse revenue streams maintain higher transparency, while new AI startups trend toward opacity,” the report reads. “Our transparency scoring system shows Meta's open-weight Llama 3.2 (88.9/100) and Google's Gemini 1.5 (62.5/100) leading in disclosure practices, while newer models from startup firms like OpenAI's o1 Preview (44.7/100) and xAI's Grok-2 (19.4/100) are significantly more opaque.”

But there are many other examples of open-source models developed by firms exclusively devoted to AI development. And the report doesn’t note the massive investments OpenAI and Anthropic have received, including from major tech companies like Microsoft, Google and AWS.

Acknowledging the limitations of the report’s small sample size, ARI posits that a lack of administrative resources could explain the comparative opacity of the “startups” it examined.

“We do not purport to know for certain that this trend would hold if more models were included or why it might be the case,” the group wrote while noting the “pattern raises important questions about whether an AI company’s financial position impacts its ability and desire to be transparent. If a link is identified, it would open further questions about how to align commercial incentives with the public interest when it comes to AI transparency, particularly as these systems become further integrated into society.”

The report registers ARI’s opposition to open-weight models in a footnote that takes issue with Meta’s particular association with transparency.

“Google and Meta have both been accused of lagging on capabilities, but their thriving digital advertising businesses could mean that they can choose to compete instead on transparency,” the report reads. “Meta in particular has argued forcefully in favor of openness, partially because, as CEO Mark Zuckerberg noted, ‘Selling access to AI models isn’t our business model.’”

The footnote appending the Zuckerberg quote reads: “while Meta has been effectively transparent about many aspects of their Llama models, its choice to release Llama’s model weights publicly comes with significant risks. Researchers affiliated with the People’s Liberation Army have already used Meta’s models to develop an AI tool for the Chinese military. This patently harmful result further emphasizes the importance of separating principles like transparency from practices like the unrestricted release of model weights.”

In another footnote, about the need for transparency to facilitate government oversight, the ARI report emphasizes that disclosures can be sufficiently made through developers’ own descriptions of their training data -- as opposed to allowing auditors unfettered access to their training datasets or processes, for example -- drawing an analogy to APIs.

“Importantly, this type of transparency does not require the open release of model weights,” the footnote reads. “An analogy here would be APIs. Developers can understand APIs through clear documentation, performance characteristics, and technical specifications, but don’t have access to the underlying technical implementation.”
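By way of illustration -- the following sketch is not from the report, and every name in it is hypothetical -- a caller can program against a documented interface without ever seeing what sits behind it:

```python
from abc import ABC, abstractmethod

class TextModel(ABC):
    """The published contract: inputs, outputs and stated guarantees,
    analogous to an API's documentation and technical specifications."""

    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        """Return a completion of at most max_tokens tokens."""

# Callers build against the specification alone...
def summarize(model: TextModel, text: str) -> str:
    return model.complete(f"Summarize:\n{text}", max_tokens=128)

# ...while the vendor's concrete subclass -- the weights, architecture
# and serving stack -- remains private behind the interface.
```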

In summary, ARI said its findings “highlight the urgent need for robust, standardized disclosures to bolster transparency -- disclosures that balance meaningful oversight with competitive innovation.”

“We conclude by broadly calling for industry standards and legislative frameworks that could address this transparency deficit without inadvertently consolidating power among large tech incumbents,” the report reads.