May 16, 2025
Stanford spotlights call for training data transparency after scaling fails to fix bias
Research featured in an annual index issued by Stanford University’s Institute for Human-Centered AI suggests that increasing the amount of data a model is trained on won’t necessarily address implicit biases in its outputs, underscoring the need for transparency in the training process.
In the “Fairness and Bias” section of its “Responsible AI” chapter, the 2025 index, released April 8, describes the impact of racial classification in multimodal models and the measurement of implicit bias in explicitly unbiased LLMs. It...