Inside AI Policy

July 22, 2024

By Charlie Mitchell

The National Institute of Standards and Technology is slated to produce several key deliverables on July 26 under directives in President Biden’s executive order on artificial intelligence, including a risk management framework profile for generative AI and guidance on AI and secure software, as well as a plan for international engagement on AI standards.

By Rick Weber

President Biden’s decision to end his reelection bid amid growing calls from congressional Democrats and allies comes at a crucial time for implementation of his executive order to promote the trustworthy development and use of artificial intelligence, which has been a major policy organizing vehicle for his administration.

By Charlie Mitchell

The Information Technology Industry Council is urging the Department of Defense to take advantage of commercial solutions as it adopts artificial intelligence, seek out multiple and varied suppliers and recognize the best practices developed by the private sector as the Pentagon crafts a “trusted AI” roadmap for the defense industrial base.

By Mariam Baksh

A Center for Strategic and International Studies fellow describes federal privacy legislation as an “opportunity” to establish a common regulatory framework for artificial intelligence with the European Union, even as fundamental elements of a privacy bill remain elusive on Capitol Hill.

By Mariam Baksh

Despite suggestions to the contrary, it is not appropriate to use general-purpose large language models to make clinical decisions, according to a leading member of Stanford University's Institute for Human-Centered Artificial Intelligence, who suggested the technology will fail to gain a green light from regulators.

By Charlie Mitchell

The House Financial Services Committee will hear from the chief risk management officer for NASDAQ, an influential fair housing activist and financial and technology industry representatives at a July 23 hearing on artificial intelligence that will put a spotlight on four related pieces of legislation.

In comments to the Department of Justice, a tech industry group promoting open markets dismissed altogether the threat of companies using algorithms to collude when setting their prices.

The Department of Energy has issued a roadmap for the “Frontiers in Artificial Intelligence for Science, Security and Technology,” or FASST, initiative designed to put DOE’s vast research and technical infrastructure to work behind advancing U.S. leadership and harnessing AI for the public good.

The U.S. Patent and Trademark Office has issued updated guidance addressing “subject matter eligibility” for patents of inventions related to artificial intelligence and other emerging technologies, under a directive in President Biden’s AI executive order.

The Federal Trade Commission flagged crucial details for consideration in the policy debate over how to enable access to foundational artificial intelligence models for fostering competition and innovation while guarding against potential harms associated with misuse of the technology.

Amba Kak, a leader of the AI Now Institute, in a recent congressional appearance said binding federal privacy standards would provide a “foundational toolkit” for applying accountability to artificial intelligence companies, while emphasizing the need for data minimization and impact assessments, and for addressing algorithmic biases.

Sen. Amy Klobuchar (D-MN) has offered as amendments for an upcoming defense policy bill debate two of her bills to protect elections from artificial intelligence threats, both of which cleared the Rules Committee she chairs on party-line votes.

President Biden’s pick to be the Defense Department’s first assistant secretary for cyber policy, Michael Sulmeyer, told a Senate panel last week that accelerating the integration of artificial intelligence capabilities into military operations will be crucial to furthering the Pentagon’s cybersecurity defenses.

Sen. J.D. Vance (R-OH), former President Trump’s just-announced pick for vice president, weighed in with concerns about “over-regulation” of artificial intelligence at a recent Senate Commerce hearing, offering another insight into a future Trump administration’s AI policies after Republicans last week approved a party platform pledging to revoke President Biden’s AI executive order.

The Center for Democracy and Technology shared best practices for collecting and managing data about disabled individuals with the Office of Science and Technology Policy as the White House continues working to implement its executive orders on equity and artificial intelligence.

The technology industry and analysts are studying the signs for potential policy shifts on artificial intelligence and other tech issues that could accompany the election of former President Trump and his vice presidential nominee, Sen. J.D. Vance (R-OH), with stakeholders expressing the hope that tech and AI policy would remain on a bipartisan trajectory even as a GOP administration tilts the balance toward supporting innovation and away from regulation.

The U.S. Chamber of Commerce urged senators to act first on privacy before moving to regulate artificial intelligence while TechNet said proper privacy policy would address AI concerns, and both groups sought extensive changes in legislation drafted originally by Senate Commerce Chair Maria Cantwell (D-WA) and House Energy and Commerce Chair Cathy McMorris Rodgers (R-WA), in separate letters to Senate Commerce leaders ahead of a hearing last week.

A leading legal officer at the NAACP told a presidential advisory group on artificial intelligence to recommend that the Biden administration prohibit AI uses seen as discriminatory, among other proposals for protecting civil rights amid the growing pervasiveness of AI technologies.

Cybersecurity and Infrastructure Security Agency Director Jen Easterly will help open the annual Black Hat conference with a keynote address on election security that will cover threats from generative AI, while she will also discuss artificial intelligence in a separate presentation on “technology and the four V’s.”

The Information Technology Industry Council has released an “AI accountability framework” that includes “consensus” best practices for developers, deployers and “integrators” for so-called high-risk artificial intelligence systems including frontier models.

A group of senators active on artificial intelligence issues has introduced a bill to create a nonprofit institution in support of standards work at the National Institute of Standards and Technology seen as essential to setting global safeguards around AI, giving a boost to legislation that has already cleared the House Science Committee.

Tech-sector groups and sources are downplaying the practical ramifications of a commitment in the Republican National Committee’s 2024 platform to revoke President Biden’s executive order on artificial intelligence, although the policy plank reflects intensifying GOP opposition to AI regulation in contrast to bipartisan calls for greater oversight of the technology that dominated policy discussions earlier this year.

The Justice Department’s seizure of Russia-based social media accounts on X that were linked to an artificial intelligence-powered disinformation campaign in the United States and elsewhere came with details on how the threat could have spread to other platforms, along with recommended mitigation measures for the tech industry.

An artificial intelligence tool the State Department has been using to monitor social media platforms will be included in an updated inventory of ways the technology is being employed by the agency, as required under President Biden’s executive order on artificial intelligence, according to State.

An artificial intelligence system called “Northstar” is informing State Department efforts to shape public narratives and policy by monitoring media reports and social platforms, according to senior officials.

Commerce Secretary Gina Raimondo defended the administration’s decision to use the Defense Production Act to require artificial intelligence developers to report to the government under President Biden’s executive order for safe and secure AI, during a House hearing on the Commerce Department’s budget request.

OpenAI is asking a federal district judge to force the Authors Guild, which represents class-action plaintiffs in landmark copyright litigation, to release research and other withheld information that the company says would undermine the lawsuit’s claims about generative artificial intelligence’s harms to writers.

Getty Images is accusing Stability AI of unfair competition in violation of trademark and copyright laws, in a revised complaint filed with a Delaware federal court, a move that marks growing legal pressures on AI developers over the data and information they use to train their generative models.

Two recent landmark Supreme Court decisions that could set new legal standards for protecting free speech and restricting the authority of regulators drew comments from justices on the potential for artificial intelligence to complicate implementation of those rulings.

A federal district court has rejected allegations by class-action plaintiffs that GitHub’s artificial intelligence-powered tools for managing and sharing code violated copyright protections, while allowing claims of a breach of open-source licensing to proceed.

Two new reports from Moody’s Ratings examine the “surging” demand for data center capacity fueled by artificial intelligence and the associated “huge capital outlays” by the biggest tech companies, noting that even these deep-pocketed “hyperscalers” face financial as well as regulatory risks around the investments.

A leader of the U.S. Black Chambers told a presidential advisory group on artificial intelligence about the potential benefits of a pending bill in the Senate for helping small and minority-owned businesses gain access to the emerging technology.

Researchers at Georgetown University’s Center for Security and Emerging Technology have crafted a report on artificial intelligence governance that urges policymakers to develop an AI incident tracking and reporting regime, as well as to bolster “AI literacy” and develop flexible regulatory policies for the rapidly evolving technology.

The Data and Trust Alliance, a cross-sector industry group, has released a package of “data provenance standards” to help businesses assess the quality of datasets used in artificial intelligence models, with an eye toward building trust in AI systems among businesses and regulators alike.