Inside AI Policy

May 9, 2024


By Rick Weber

OpenAI has been ordered by a federal district court to investigate whether current and former board members and employees discussed the training of its generative artificial intelligence model, ChatGPT, on social media, in response to a request by class-action plaintiffs who accuse the company of copyright infringement.

By Rick Weber

Among the first batch of projects to be supported by the recently established National AI Research Resource pilot program is a proposal to use generative artificial intelligence to improve image recognition for the purpose of “discriminative” tasks, according to a summary of the project.

By Mariam Baksh

Privacy advocates are echoing calls from a new bipartisan group of senators in urging leaders of the upper chamber to support an amendment to the Federal Aviation Administration reauthorization bill that would halt the Transportation Security Administration’s rollout of facial recognition technology at airports across the country.

By Rick Weber

The House approved by voice vote a bipartisan bill requiring federal agencies to identify and set aside artificial intelligence-generated comments on proposed rules, with the intention of weeding out automated duplicates that advocacy groups might use to overwhelm and influence regulatory policymakers.

By Charlie Mitchell

OpenAI CEO Sam Altman said election security efforts related to artificial intelligence challenges have gone better than expected this year, while expressing confidence in his company’s approach to developing the groundbreaking technology and how it aligns with an emerging policy landscape, during a May 7 Brookings Institution event.

By Charlie Mitchell

Secretary of State Antony Blinken in a speech at the RSA Security conference said a handful of advanced technologies including artificial intelligence are converging and transforming society at a blistering pace, creating an urgent need for U.S. leadership on innovation as well as standards and norms.

The Information Technology Industry Council is calling on federal policymakers to back up CHIPS and Science Act funding for domestic production of advanced semiconductors with sizable investments in research aimed at artificial intelligence development, which is also authorized under the CHIPS law.

Civil society groups responding to a request for information by the Office of Management and Budget have joined forces in highlighting how artificial intelligence is different and why it should therefore prompt changes to the Federal Acquisition Regulation, with emphasis on a need to test systems before and after deployment for harms such as propagating bias against minorities.

The Transportation Department’s research agency is asking for input on how artificial intelligence can be employed in the transportation sector and where to focus research efforts, as DOT moves to meet its assignments under an executive order.

An overly broad definition of a key term in President Biden’s executive order on artificial intelligence that is aimed at deterring the use of U.S. cloud companies to train models that could be employed for malicious cyber activity might end up harming use of the technology for public benefit, a major industry group told the Commerce Department.

Senate Armed Services Committee members focused on artificial intelligence-related threats from China in the open portion of a hearing to review the intelligence community’s 2024 global threat assessment with Director of National Intelligence Avril Haines.

A Senate Judiciary subcommittee was told that a pending legislative proposal for extending copyright restrictions on audio and visual replicas produced by artificial intelligence could run afoul of free-speech rights, as senators eye revisions with the hope of moving the landmark legislation later this year.

The leaders of a Senate Judiciary subcommittee are pledging to move legislation this year to protect intellectual property from the growing threat of audio and visual replicas generated by artificial intelligence, with Sens. Chris Coons (D-DE) and Thom Tillis (R-NC) planning to introduce a bill in the coming weeks.

Just as a major industry group offered praise for the Future of AI Innovation Act, the Senate Commerce Committee has removed the bill -- and all other legislation -- from the agenda for its May 1 executive session.

The National AI Advisory Committee is developing recommendations on deploying artificial intelligence to analyze the often controversial use of body-worn cameras by local police and the resulting video footage, as some committee members raise concerns about potential privacy violations and the need to limit access to the growing volume of such footage.

The U.S. Chamber of Commerce is urging federal officials to expand the opportunity for industry engagement on the Office of Management and Budget’s inquiry into revamping procurement policy to address the benefits and risks of artificial intelligence, in comments that reiterated industry concerns over a short comment deadline under an OMB request for information.

The Information Technology Industry Council says “commercial solutions” and vendor safety assessments can meet the federal government’s needs in purchasing artificial intelligence technologies, in response to the Office of Management and Budget’s request for information on possible changes to procurement policy under the Biden administration’s AI executive order.

The National AI Advisory Committee is slated to consider a proposal for “field testing” artificial intelligence technologies for use by law enforcement agencies, with the goal of inclusion in federal guidance under Biden’s AI order consistent with the National Institute of Standards and Technology’s risk management framework for AI technologies.

The Office of Management and Budget’s request for information on implementing the procurement provisions of President Biden’s executive order on artificial intelligence is surfacing a longstanding debate about what some describe as a loophole for commercial-off-the-shelf technology, as a key industry group tells the agency to rely on the marketplace to indicate quality assurance.

A researcher at the Mercatus Center, a free-market think tank at George Mason University, makes the case that the Federal Trade Commission’s new rule banning noncompete agreements will open up the talent pool for workers with skills in artificial intelligence-related areas, boosting development of the technology in the United States.

The Department of Homeland Security has named 22 members to a new Artificial Intelligence Safety and Security Board mandated by President Biden’s executive order on AI, with a diverse roster including OpenAI’s Sam Altman, Arati Prabhakar of the White House Office of Science and Technology Policy, Maya Wiley of the Leadership Conference on Civil and Human Rights and other industry leaders and government officials.

The U.S. Chamber of Commerce is urging the Justice Department to provide greater clarity over covered entities and exemptions, and to take into account potential impacts on U.S. business operations in China, in comments on a proposed rulemaking that would restrict transfers of “bulk sensitive personal data or government-related data” to adversarial nations using artificial intelligence to target U.S. citizens.

The good-government advocacy group Common Cause has released a report warning that artificial intelligence and deepfake videos and audio will worsen disinformation threats to the 2024 election, even as officials have become adept at identifying and mitigating efforts by those seeking to disrupt and sway the electoral process.

The potential for large language models like OpenAI’s ChatGPT to exhibit bias toward a political party is one of several trends observed by Stanford University’s Institute for Human-Centered Artificial Intelligence that indicate both optimism and concern accompanying the technology’s rise.

Open-source researchers at EleutherAI have submitted comments to the National Telecommunications and Information Administration describing how generally closed foundation models can present the same safety and security risks as -- or even greater risks than -- generally open models.

Department of Energy researchers said methods for “red teaming” the safety of artificial intelligence technologies as required under President Biden’s executive order are still being developed, including establishing “best practices” for tests that have their origins in cybersecurity.