Inside AI Policy

October 8, 2024

By Charlie Mitchell

The National Institute of Standards and Technology, in a new request for information, asks how awards and recognition programs can encourage participation in standards development work related to artificial intelligence and other emerging technologies, seeks best practices for standards workforce education, and invites general input on the national standards strategy.

By Charlie Mitchell

NASA’s plan for complying with Office of Management and Budget guidance on federal agencies’ uses of artificial intelligence includes a new AI strategy board and a working group to lead on AI policy at the space agency, with a focus on enabling deployment of the technology and prohibiting its use only as a “last resort.”

By Mariam Baksh

Implementing policies to guide the trustworthy use of artificial intelligence requires rigorous data management, and lapses could have dire consequences, particularly for immigrants and other shifting demographics, according to a notable data-governance professional who is proposing transparency measures for national security components.

By Rick Weber

Sen. Edward Markey (D-MA) won a commitment from Sen. John Hickenlooper (D-CO), chair of a key technology subcommittee, to seek bipartisan support for a proposal to require the federal government to assess the potential environmental and energy impacts of artificial intelligence, setting up a process that will likely extend into the next Congress.

By Charlie Mitchell

The Wiley law firm in a new client alert offers an early take on how the Office of Management and Budget’s artificial intelligence procurement guidance to federal agencies will affect contractors and serves as a “building block” for regulation, with requirements expected to show up “on a very quick timeline.”

By Mariam Baksh

With pressure mounting on intelligence and law enforcement agencies to be more open about how they plan to use artificial intelligence under President Biden’s executive order regarding the technology, a key Justice Department official defended maintaining some level of secrecy to preserve the viability of their procedures.

The National Institute of Standards and Technology’s AI Safety Institute is seeking public comments on developing “best practices” for mitigating the risks from dual-use artificial intelligence systems, which can enable medical research breakthroughs but can also be used to develop bio-chemical weapons and other biohazards and pathogens.

The conservative advocacy group Americans for Prosperity is calling on the Federal Communications Commission to abandon a proposal requiring disclosure of artificial intelligence uses in political advertising, saying it would violate the First Amendment and trample over jurisdictional lines, and that the FCC “lacks the expertise to effectively navigate the complexities of AI.”

The U.S. Agency for International Development and the State Department are suggesting governments of low- and middle-income countries consider providing tax breaks and other incentives for companies to responsibly pursue artificial intelligence opportunities across the globe.

The National Institute of Standards and Technology before the end of the year will launch a public competition to participate in a $100 million research and development project on how artificial intelligence can be used in “sustainable” semiconductor manufacturing, the agency says in a notice of intent.

The Software & Information Industry Association and education groups are calling for “member-level” briefings to refocus lawmakers on artificial intelligence issues affecting the education sector, in letters to the leaders of the House and Senate AI working groups.

Federal Communications Commission Chair Jessica Rosenworcel said legislation sponsored by Sen. Amy Klobuchar (D-MN) would create a legally defensible standard for governing the use of artificial intelligence in political advertisements as states prepare for the upcoming elections by passing their own laws.

Securities and Exchange Commission Chairman Gary Gensler warned of market dominance among cloud service providers and stressed AI deployers’ responsibility for preventing AI-empowered fraud, in explaining his approach to artificial intelligence regulation at a House Financial Services oversight hearing.

Sens. Martin Heinrich (D-NM) and Mike Rounds (R-SD), co-chairs of the Senate’s AI working group, have introduced legislation to leverage the power of artificial intelligence to enhance the U.S.’s pandemic preparedness through the establishment of a MedShield program as recommended by the National Security Commission on AI.

The tech firm Adobe points to new authentication technology and industry efforts on standards as achieving the transparency goals set out by the Federal Communications Commission in its proposed rulemaking on identifying uses of artificial intelligence in political advertising.

White House officials are working on more granular guidance for agencies to implement President Biden’s executive order on artificial intelligence, as federal agencies publish their compliance plans under an Office of Management and Budget memo requiring the discontinuation of any safety- or rights-impacting use cases presenting unmitigable risks.

The private sector is making enormous investments in artificial intelligence research and development and the federal government needs to do its part with greater funding for R&D in areas like public health and defense, Arati Prabhakar, director of the White House Office of Science and Technology Policy, said at a U.S. Chamber of Commerce event.

The National Institute of Standards and Technology’s risk management framework for artificial intelligence differs from the agency’s landmark cybersecurity framework due to a lack of references, the most urgent of which relate to standards for testing, evaluation, validation and verification, according to a leading technology trade association.

MITRE, along with industry partners, has embarked on an artificial intelligence incident-sharing initiative designed to bolster knowledge around the threats facing AI systems and the tools available to operators, while furthering collaboration across industry sectors and stakeholder communities.

Among the Information Technology Industry Council’s priorities for shaping a code of practice entities might use in demonstrating compliance with the European Union’s AI Act is to ensure it doesn’t reach outside the legislation’s purview.

Leading officials at tech giant NVIDIA are touting the efficiency benefits of artificial intelligence at a time when the increased electricity demands of data centers supporting the technology are raising concerns about impacts on the power grid and the potential for exacerbating the climate crisis.

The nonprofit tech company Mozilla in a new report urges development of a “public AI” ecosystem as a counterweight to a few major technology companies controlling the development and deployment of artificial intelligence and to promote “de facto” safety standards in the absence of formal regulation.

To create inclusive artificial intelligence infrastructure that would facilitate sustainable development goals and equity in low- and middle-income countries as the technology develops, the U.S. State Department and four partner agencies said policymakers should support the proliferation of open AI systems.

The United Kingdom’s AI Safety Institute has posed a series of questions for the major artificial intelligence companies and researchers invited to participate at its Nov. 21-22 “Conference on Frontier AI Safety Frameworks,” to take place alongside a “convening” of the international network of artificial intelligence safety institutes co-hosted by the U.S. Commerce and State departments.

In an open letter signed by dozens of European AI executives and academics, Meta founder and CEO Mark Zuckerberg argued European law should uniformly permit AI developers to train foundational models on data collected from within the jurisdiction.

The Commerce and State departments have announced “the inaugural convening” of the international network of artificial intelligence safety institutes for Nov. 20-21 in San Francisco, under an initiative launched in May at the virtual AI Seoul Summit co-hosted by South Korea and the United Kingdom.

A federal district court has approved a proposed order by parties in a landmark lawsuit filed by the New York Times and other news outlets against Microsoft and OpenAI alleging copyright infringements from the training of ChatGPT, a generative artificial intelligence model.

A legal scholar argues that courts should apply a negligence-based approach to the safe development and use of artificial intelligence, which would shift liability from the AI systems to the individuals who develop those systems, with President Biden’s AI executive order serving as a potential template for establishing a “standard of care.”

The New York Times and other news outlets in a consolidated lawsuit against Microsoft and OpenAI are asking a federal court to protect from public disclosure the source codes referenced in documents to be exchanged among litigants, in a dispute over whether the tech companies violated copyright protections in the training of the generative artificial intelligence model ChatGPT.

Tech giant Meta is asking a federal district court to again reject a class-action lawsuit alleging that it violated copyright protections in the training of its artificial intelligence models, with the company apparently employing a similar strategy in its response to the revised complaint as it did in having much of the first complaint dismissed.

Security analysts at a briefing on disinformation risks for the upcoming election said emerging artificial intelligence technologies, which exacerbate and accelerate the threat of deepfakes, pose more of a sociological than a technological concern.

The Department of Energy is uniquely positioned due to its vast computing and laboratory resources to address concerns about increased energy demand from artificial intelligence, according to researchers at the Bipartisan Policy Center, who say DOE can contribute to improved efficiencies and expedited permitting to expand the nation’s energy capacities.

The House Science Committee has approved by voice vote a bill on tracking incidents related to artificial intelligence, a Department of Energy AI research measure, and a bill to promote development of small modular nuclear reactors that could help power energy-intensive AI data centers.

The Information Technology Industry Council recommends policymakers across the globe do more to facilitate the use of small modular reactors and other “advanced” carbon-free energy sources, acknowledging an outsized role for the tech industry in environmental protection as artificial intelligence is increasingly applied.