Inside AI Policy

June 15, 2024

By Rick Weber

The Senate Armed Services Committee approved a fiscal 2025 National Defense Authorization Act that would expand the authority of the Pentagon’s chief artificial intelligence officer to engage with military personnel, among other AI-related provisions, after the chairman of the committee voted against the overall bill because of funding disputes.

By Mariam Baksh

A challenge competition the National Institute of Standards and Technology is hosting under President Biden’s executive order on artificial intelligence is just the beginning of a standards development process being undertaken through “GenAI,” a new project that will eventually include datasets more reflective of the real world, according to a NIST official.

By Rick Weber

Two of the Department of Energy’s laboratories have submitted comments on the use of artificial intelligence to boost electric grid resiliency through improved predictability and planning, while raising concerns about ensuring privacy for the data that drives those AI tools.

By Charlie Mitchell

A key technology group is praising the pro-innovation approach in a government artificial intelligence procurement bill by Senate Homeland Security Chairman Gary Peters (D-MI) and Sen. Thom Tillis (R-NC), while a digital rights leader sees significant benefits in the measure’s approach to transparency and other guardrails.

By Charlie Mitchell

Senate Majority Leader Charles Schumer (D-NY) highlighted a new bill on safe and secure government AI procurement as signaling legislative progress on artificial intelligence, while suggesting a failed bid to pass a deepfakes measure is a temporary setback.

By Rick Weber

Members of the House Appropriations Committee are expressing support for efforts by the Defense Department’s Chief AI Officer to expand data consumers’ and producers’ access to a central data repository, as part of broader reforms that include prioritizing improved business systems over the “advanced lethality” of artificial intelligence applications.

A competition launched by the National Institute of Standards and Technology to help fulfill its obligations under an executive order to create guidelines for evaluating artificial intelligence systems risks producing unreliable benchmarks that would lead to unworkable standards -- due to the lack of open-source data involved -- according to the chief responsible AI officer for the National Fair Housing Alliance.

Researchers at RAND Corp. are recommending to the Department of Energy that regulators require mandatory reporting of artificial intelligence applications within electricity systems as part of broader advice for achieving the potential reliability benefits of AI for the electric grid.

The North American Electric Reliability Corp. is telling the Department of Energy that artificial intelligence can assist in planning for future bulk-power construction projects and predicting severe weather storms to prevent potential blackouts.

As the Biden administration considers a rule associated with preventing artificial intelligence capabilities from being leveraged by foreign adversaries, the White House as well as industry should be more transparent about proposed computing power thresholds for triggering a series of reporting -- and potentially licensing -- requirements, according to fair housing advocate Michael Akinwumi.

Major technology and other business groups say the American Privacy Rights Act, a leading legislative proposal advancing in the House Energy and Commerce Committee, must be revised to fully pre-empt state privacy laws to gain their support.

A House Armed Services Committee report on its recently approved National Defense Authorization Act would require the Pentagon to report to lawmakers on the use of artificial intelligence to counter chemical and radiological threats as well as to modernize and streamline the process for declassifying sensitive information.

Sen. Amy Klobuchar (D-MN) touted her legislative proposal for setting risk-based regulations for artificial intelligence at a joint committee hearing on economic growth, where a researcher at a market-based think tank offered strong support for the plan.

Leading approaches to regulating the development and deployment of frontier artificial intelligence models are based on hypothetical risks and threaten innovation while handicapping already meager defenses against actual attacks powered by the technology, according to a researcher from George Mason University’s free market think tank.

The Commerce Department’s Bureau of Industry and Security has submitted to the Office of Information and Regulatory Affairs its proposal for a rule that would require certain developers of artificial intelligence models and operators of computer clusters to disclose information the White House has determined is important for national security.

The Alliance for Trust in AI touts industry leadership in global standards work while urging a “nuanced” approach to addressing risks related to synthetic content, in comments submitted on two National Institute of Standards and Technology draft guidances issued under President Biden’s executive order on AI.

The Information Technology Industry Council has submitted comments on three key artificial intelligence guidance documents as the National Institute of Standards and Technology wraps up a public comment period on a handful of initiatives launched under President Biden’s Oct. 30 AI executive order.

The trade group TechNet is urging the National Institute of Standards and Technology to promote consistency in international AI technical standards as well as in voluntary frameworks, while raising concerns about how social issues may be addressed in standards work, as comments begin flowing in on four AI-focused NIST guidances.

Aspen Digital has issued three “checklists” to help stakeholders including election officials and AI firms combat the illicit use of artificial intelligence tools to influence elections, with advice on countering AI-powered voter suppression, deepfakes and language-based influence operations.

The Center for Democracy and Technology offers support for draft guidance by the National Institute of Standards and Technology on the risks posed by “synthetic content,” while urging the agency to flesh out an approach to multistakeholder engagement on issues such as content labeling and for more details on research efforts.

The security firm Trellix, in a new report based on a survey of CISOs in North America, finds corporate security officers are under intense pressure amid the speed and sophistication of artificial intelligence-powered cyber attacks, and a prevailing view that AI regulation would help them secure their organization’s systems.

The U.S. Patent and Trademark Office is reopening for two weeks the comment period on “Inventorship Guidance for AI-Assisted Inventions,” to accommodate additional parties hoping to weigh in on the document after the comments window closed in May.

The World Privacy Forum, a member of NIST’s AI Safety Institute Consortium, is urging the agency to hew closely to international standards development processes as it advances synthetic content guidance under an artificial intelligence executive order and in work at the new consortium.

Commerce Secretary Gina Raimondo and her counterpart in the Singapore government have announced shared principles and plans for collaboration on artificial intelligence, based on recognition of the "tremendous potential of AI for good” along with “the need to mitigate the challenges that come with the rapid, global proliferation of AI.”

BSA-The Software Alliance, in comments on four National Institute of Standards and Technology guidance documents on artificial intelligence, points to its own framework for safe and secure AI while urging support for industry tools and calling for policies that back “full legal use of data to train AI systems.”

Researchers at RAND Corp. have analyzed malicious actor threats to core elements of foundational artificial intelligence models in order to develop a series of security steps that the researchers say should be urgently adopted by policymakers and AI developers.

Health insurance giant Humana has filed a “sealed” response to an amended complaint by class-action plaintiffs accusing the company of using artificial intelligence to skirt coverage of healthcare claims, with Humana citing the Health Insurance Portability and Accountability Act to block the public release of sensitive data.

A federal judge in California has granted Google’s request to have a class-action lawsuit dismissed, citing a recent ruling by a different judge in the same court that threw out a similar copyright infringement case against a different artificial intelligence developer.

Tech giant Microsoft argues that allowing the New York Times to dramatically expand its complaint alleging copyright infringement from the training of artificial intelligence models would overburden the company to the point of denying it a just defense, in a recent filing with a federal district court in New York.

A federal district court issued a protective order to prevent the public disclosure of information to be collected and exchanged in the New York Times’ landmark copyright infringement lawsuit against OpenAI and Microsoft over the training of ChatGPT.

The Department of Energy’s laboratories are taking the lead for the federal government on testing and validating artificial intelligence frontier models under a research accelerator initiative announced last month and further detailed in testimony at a congressional hearing on AI and economic growth this week.

Members of the House and Senate Joint Economic Committee were warned about overregulating emerging artificial intelligence products and services, at a hearing that emphasized the potential benefits of AI for reducing healthcare costs and promoting scientific breakthroughs.

The Center for Security and Emerging Technology, a Georgetown University think tank, says the federal government is poised to set enforceable artificial intelligence best practices for suppliers through procurement regulations, and that experiences in the cyber domain provide important lessons on implementing those AI rules.

A new report from OpenAI describes trends around malicious actors’ uses of the firm’s artificial intelligence products and models and efforts to counter such activity, as the generative AI leader moves to burnish its credentials for safety and security.