Inside AI Policy
Stanford spotlights call for training data transparency after scaling fails to fix bias

By Mariam Baksh / April 16, 2025

Research featured in an annual index issued by Stanford University’s Institute for Human-Centered AI suggests that increasing the amount of data a model is trained on won’t necessarily address implicit biases in its outputs, underscoring the need for transparency in the training process.

In the “Fairness and Bias” section of a chapter discussing “Responsible AI,” the 2025 index, released April 8, describes the impact of racial classification in multimodal models and the measurement of implicit bias in explicitly unbiased LLMs. It...
