Inside AI Policy

Future of Life Institute gives low marks for safety efforts by leading AI firms

By Charlie Mitchell / July 18, 2025

The new “AI Safety Index” from the Future of Life Institute assesses the safety practices of seven leading artificial intelligence developers and finds their efforts to be lacking across “six core dimensions.”

The six areas examined by FLI are risk assessment, current harms, safety frameworks, existential safety, governance, and information sharing.

Anthropic scores an overall C+, the top mark, followed by OpenAI with a C and Google DeepMind with a C-. Behind the leaders, xAI and Meta each receive a D, while Chinese firms Zhipu AI and DeepSeek each receive an F.

Among its key findings, FLI says in the July 17 report:

  • Anthropic gets the best overall grade (C+). The firm led on risk assessments, conducting the only human-participant bio-risk trials; excelled in privacy by not training on user data; conducted world-leading alignment research; delivered strong safety benchmark performance; and demonstrated governance commitment through its Public Benefit Corporation structure and proactive risk communication.
  • OpenAI secured second place ahead of Google DeepMind. OpenAI distinguished itself as the only company to publish its whistleblowing policy, outlined a more robust risk management approach in its safety framework, and assessed risks on pre-mitigation models. The company also shared more details on external model evaluations, provided a detailed model specification, regularly disclosed instances of malicious misuse, and engaged comprehensively with the AI Safety Index survey.
  • The industry is fundamentally unprepared for its own stated goals. Companies claim they will achieve artificial general intelligence (AGI) within the decade, yet none scored above D in Existential Safety planning. One reviewer called this disconnect “deeply disturbing,” noting that despite racing toward human-level AI, “none of the companies has anything like a coherent, actionable plan” for ensuring such systems remain safe and controllable.

"These findings reveal that self-regulation simply isn’t working, and that the only solution is legally binding safety standards like we have for medicine, food and airplanes. It’s pretty crazy that companies still oppose regulation while claiming they’re just years away from superintelligence,” said FLI president Max Tegmark, who is an MIT professor.

Max Tegmark, President, Future of Life Institute

FLI flagged a new report from the ratings group SaferAI that also gave top AI companies poor grades. “All companies currently have weak to very weak risk management practices,” SaferAI said.

“Despite distinct methodologies and being compiled independently, both new reports found that none of the frontier AI tech companies are taking safety seriously, and many are ignoring the notion of safety altogether,” an FLI spokesperson said.

“Each report assesses a distinct but complementary set of criteria and offers a unique perspective on how major AI developers are (or are not) approaching the question of safety as the technology rapidly grows in power and capability. SaferAI’s rating is more in-depth but narrowly focused on risk management while FLI’s rating is an all-encompassing expert-based assessment of companies’ behavior,” the spokesperson added.

The Future of Life Institute is an advocacy group that led a high-profile campaign by researchers to pause AI development, citing existential risks from the technology. Elon Musk, founder and CEO of xAI, is an advisor to the institute.

Among its other efforts, the group said in comments submitted to the White House earlier this year that the National Institute of Standards and Technology should play a crucial role in controlling the export of AI models by establishing related criteria.

FLI says in its press release, “This is the second iteration of FLI’s AI Safety Index, which was first published in December of 2024. Since then, OpenAI overtook Google DeepMind in the rankings partly by improving their transparency, publicly posting a whistleblower policy, and sharing company information for this Index.”

It notes, “Chinese AI firms Zhipu.AI and Deepseek both received failing overall grades. However, the report scores companies on norms such as self-governance and information-sharing, which are far less prominent in Chinese corporate culture. Furthermore, as China already has regulations for advanced AI development, there is less reliance on AI safety self-governance.”

The group says the grades are based on each company’s survey responses and publicly available documents.

FLI explains in the executive summary of the new report, “The Future of Life Institute's AI Safety Index provides an independent assessment of seven leading AI companies' efforts to manage both immediate harms and catastrophic risks from advanced AI systems. Conducted with an expert review panel of distinguished AI researchers and governance specialists, this second evaluation reveals an industry struggling to keep pace with its own rapid capability advances -- with critical gaps in risk management and safety planning that threaten our ability to control increasingly powerful AI systems.”