Inside AI Policy

February 24, 2024

By Rick Weber

The National AI Advisory Committee is considering recommendations for developing “minimum standards” for data transparency to guide the training and development of artificial intelligence models and ensure their accuracy and trustworthiness.

By Charlie Mitchell

The National Institute of Standards and Technology is about to release its first update in over five years to the landmark cybersecurity framework, and industry leaders see “CSF 2.0” as a key tool for securing artificial intelligence systems, one that will be integrated into other AI workstreams.

By Charlie Mitchell

The R Street Institute, a “free market solutions” think tank, is urging the new House AI task force and other congressional leaders to incorporate 10 principles into their work on artificial intelligence, with a focus on promoting innovation, regulatory flexibility, and pre-emption of state and local AI laws that could hinder development of the AI marketplace.

By Rick Weber

The Center for Democracy and Technology argues that the vague legal “duty of care” requirements for online platform companies in the recently revised Kids Online Safety Act would have the unintended consequence of enabling politically driven censorship of content, in violation of First Amendment free-speech protections.

By Mariam Baksh

The National Fair Housing Alliance’s Responsible AI Lab will be working to inform the guidelines NIST must produce under President Biden’s Executive Order 14110 on testing AI systems for discrimination, according to an announcement from the group.

By Charlie Mitchell

The RAND Corporation, a security-focused think tank, offers insights on the uses and limitations of red-teaming for testing artificial intelligence systems in comments to the National Institute of Standards and Technology.

The American Bankers Association touts the financial sector’s approach to artificial intelligence risk management under existing regulatory structures in comments to the National Institute of Standards and Technology, while weighing in on roles and responsibilities for safe and secure AI and other issues NIST is charged with addressing under a 2023 executive order.

Policymakers looking to foster a trusted digital ecosystem in the new age of artificial intelligence should allow access to publicly available data while attempting to limit the data and computational resources available to potentially malicious actors for training powerful models, Google said in a report on how advances in the technology stand to improve cybersecurity.

Salesforce, a leading cloud-based software service provider, offers its take on the hot-button topic of “roles and responsibilities in the AI value chain” and on other issues related to the National Institute of Standards and Technology’s tasks under the artificial intelligence executive order issued last fall, in a set of detailed comments to NIST.

A group of researchers at the National Institute of Standards and Technology is pointing to the 1979 “Belmont Report” on the use of human subjects in research as an important piece of guidance for “ethical AI research” based on “respect for persons, beneficence and justice.”

A bill introduced by Senate Homeland Security Chairman Gary Peters (D-MI) and Sen. Eric Schmitt (R-MO) would task the National Institute of Standards and Technology with developing a framework that details the job roles, knowledge, and skill sets associated with artificial intelligence in a bid to improve national competitiveness.

A compressed and increasingly crowded legislative calendar will likely limit the Senate’s ability to pass legislation on artificial intelligence beyond increased research funding and provisions in upcoming authorizations such as the annual national defense policy act.

A medical tech company official told senators that current regulatory structures are adequate for protecting against the potential harms from artificial intelligence, suggesting additional statutory authority for regulators is unnecessary at a time when lawmakers are considering how and whether to legislate on AI.

An upcoming Senate report on areas of consensus for legislation on regulating artificial intelligence, based on discussions during a series of closed-door meetings that concluded late last year, is expected to jump-start the drafting of legislation in committees over the next few months, even amid an increasingly crowded and compressed legislative calendar.

The Center for American Progress, a liberal policy nonprofit, says NIST’s work on generative artificial intelligence under President Biden’s AI executive order takes on added importance amid what it sees as slow-moving regulatory and legislative efforts, and emphasizes the central roles of the agency’s AI risk management framework and the “AI bill of rights” released in 2022 by the White House.

One area where the Equal Employment Opportunity Commission is looking to exercise its authority to enforce longstanding civil-rights law is in the use of artificial intelligence to assess workers’ productivity by closely tracking their activities, according to EEOC Chair Charlotte Burrows.

The Hacking Policy Council, which focuses on best practices for technology transparency, emphasizes the distinction between security-focused and non-security “AI testing and red-teaming” in comments to the National Institute of Standards and Technology on the agency’s tasks under President Biden’s executive order on artificial intelligence.

The Center for Democracy and Technology is arguing that standards for measuring risks from generative artificial intelligence, to be developed by the National Institute of Standards and Technology, must be valid, reliable and subject to outside verification because these measurements are core to governing foundational AI models.

Industry concerns about intellectual property associated with transparency efforts in artificial intelligence policy would be fully addressed by establishing a system of third-party validation, according to a tech-savvy advocate for fair housing.

Microsoft is taking action to ensure proper use of artificial intelligence services by disabling accounts linked to malicious actors and limiting their access to resources, according to a new report detailing threat tracking and prevention efforts in partnership with OpenAI.

The Center for Democracy and Technology is developing guidance on protecting the upcoming election from threats related to artificial intelligence, amid an apparent standoff in Congress over proposals to ban deepfakes and other AI-powered deceptive practices.

The primary benefit of the assurance labs being established by the Coalition for Health AI is to provide individuals with insight into the artificial intelligence-powered automated systems proliferating in the health sector, according to a co-chair of the new nonprofit.

The German Marshall Fund’s Alliance for Securing Democracy has released recommendations for state election officials facing growing threats of misinformation, and even violence, amplified by the emergence of artificial intelligence, saying these new technologies could be a tipping point for security threats that to date have been successfully repelled.

The Commerce Department has announced the appointment of hundreds of members representing a broad swath of the economy to an artificial intelligence research consortium that will support the National Institute of Standards and Technology’s AI Safety Institute established under President Biden’s AI executive order.

A joint hearing by House Science subcommittees demonstrated strong bipartisan support for funding and continued congressional backing for a national artificial intelligence research hub being run as a pilot project by the Biden administration.

Companies developing and deploying foundational artificial intelligence models dismissed certain safety concerns associated with particularly large systems in comments to the National Institute of Standards and Technology.