Inside AI Policy

April 29, 2024

By Charlie Mitchell

The Department of Homeland Security has named 22 members to a new Artificial Intelligence Safety and Security Board mandated by President Biden’s executive order on AI, with a diverse roster including OpenAI’s Sam Altman, Arati Prabhakar of the White House Office of Science and Technology Policy, Maya Wiley of the Leadership Conference on Civil and Human Rights, and other industry leaders and government officials.

By Mariam Baksh

The Senate Commerce Committee will hold an executive session May 1 to consider a package of bills that includes the CREATE AI Act and the Future of AI Innovation Act, both of which would codify efforts already underway in the Biden administration.

By Rick Weber

The Justice Department’s National Institute of Justice is seeking public comment to inform a report to the president, required under President Biden’s AI executive order, on how the use of artificial intelligence could promote civil and criminal justice while preventing the emerging technologies from exacerbating discriminatory practices in sentencing and police surveillance, among other concerns.

By Charlie Mitchell

The U.S. Chamber of Commerce is urging the Justice Department to provide greater clarity on covered entities and exemptions, and to take into account potential impacts on U.S. business operations in China, in comments on a proposed rulemaking that would restrict transfers of “bulk sensitive personal data or government-related data” to adversarial nations that use artificial intelligence to target U.S. citizens.

By Charlie Mitchell

The Biden administration has announced preliminary terms for its recently unveiled $6.1 billion award to Micron under the CHIPS and Science Act for advanced memory chip production projects in New York and Idaho that will play a foundational role in artificial intelligence development, along with new training hubs to meet high-tech and other workforce needs.

By Rick Weber

Attorneys are advising banks on the value of data encryption policies as financial firms consider adopting artificial intelligence technologies to enhance customer services and improve financial risk assessments, while warning of an anticipated onslaught of new AI-related regulations.

The Information Technology Industry Council is raising extensive concerns over the Department of Justice’s advance notice of proposed rulemaking on transfers of personal and government bulk data to foreign adversaries, calling it overly broad and complex and urging more time for stakeholder engagement before DOJ moves toward a final rule.

A consortium of over 200 public and private research universities is calling on the Department of Justice to ensure that non-federally funded researchers have access to exemptions in an upcoming rule to prohibit transfers of “bulk sensitive personal data” to foreign adversaries using AI to target U.S. citizens.

The banking industry is calling on the tech sector to provide “nutrition labeling” for its artificial intelligence models to validate the source and authenticity of the data used to train generative AI models, in an effort to counter fraud and other emerging AI risks, following a meeting with Treasury Department officials about the department’s recent report on AI benefits and risks.

BSA-The Software Alliance urged several changes to draft privacy legislation related to data minimization, risks of bias in AI systems and the responsibilities of service providers, in a letter sent to lawmakers ahead of a key House Energy and Commerce subcommittee hearing.

A Senate Judiciary subcommittee was told there’s no single solution to countering the growing threat deepfakes pose to the upcoming election, even as witnesses disagreed over whether such artificial intelligence-generated replicas of images and voices can even be reliably identified.

Legislation restricting data brokers’ transactions with designated “foreign adversaries” has been folded into a “sidecar” measure as part of a House package of emergency foreign aid bills, which contains assistance for Israel, Ukraine and the Indo-Pacific including Taiwan.

Senate Commerce Chair Maria Cantwell (D-WA) and a bipartisan group of committee members active on artificial intelligence are offering legislation to authorize the Biden administration’s AI Safety Institute and to support advanced-tech innovation and U.S. competitiveness and security.

The growing use of algorithms for decision-making across industry sectors has led to uneven enforcement of federal antidiscrimination laws, a problem that could be addressed by a new privacy compromise, according to a leading civil rights organization that is recommending changes to draft legislation being considered by the House Energy and Commerce Committee.

A presidential advisory group on artificial intelligence was urged to recommend that equal employment officials issue federal guidance on employers’ use of AI to screen job candidates, including mandatory auditing and reporting to ensure transparency in the hiring process.

The Biden administration has announced a preliminary agreement with Samsung Electronics to build out semiconductor manufacturing operations at two sites in central Texas, with direct funding of up to $6.4 billion under the CHIPS and Science Act and $40 billion in private investment by the company, adding Samsung to a growing list of advanced technology companies that have received CHIPS Act support in recent months.

The U.S. Patent and Trademark Office has issued guidance for agents and attorneys on artificial intelligence issues they must “navigate” when practicing before the USPTO, describing existing rules that apply to uses of AI as well as potential benefits and risks.

The Department of Housing and Urban Development may restrict public housing authorities from using federal funds on facial recognition technology under Executive Order 14110, according to testimony filed with the U.S. Commission on Civil Rights.

Class-action plaintiffs are accusing health insurance giant Humana of inappropriately profiting from the use of its artificial intelligence model to assess patient claims, arguing the nH Predict AI Model makes “unrealistic” recovery predictions to deny insureds treatment under the Medicare Advantage program.

The Commerce Department is posing dozens of questions to industry experts and other stakeholders on how artificial intelligence technologies, especially generative AI, can “correctly and responsibly” make use of the department’s vast public data assets, in a new request for information.

The Information Technology Industry Council is urging Congress and federal officials to put quantum technologies at the center of artificial intelligence and other emerging-technology policy deliberations taking place globally.

Google is accusing class-action litigants of claiming “imagined” damages from the data and information used to train the tech giant’s generative artificial intelligence models, in the latest filing in a high-profile case that could set new legal standards for developing AI technologies.

The good-government advocacy group Common Cause has released a report warning that artificial intelligence and deepfake videos and audio will worsen disinformation threats to the 2024 election, even as officials have become adept at identifying and mitigating efforts by those seeking to disrupt and sway the electoral process.

The potential for large language models like OpenAI’s ChatGPT to exhibit bias toward a political party is one of several trends observed by Stanford University’s Institute for Human-Centered Artificial Intelligence that indicate both optimism and concern accompanying the technology’s rise.

Open-source researchers at EleutherAI have submitted comments to the National Telecommunications and Information Administration describing how generally closed foundation models can present the same safety and security risks as generally open models -- or even greater ones.

Department of Energy researchers said methods for “red teaming” the safety of artificial intelligence technologies, as required under President Biden’s executive order, are still being developed, including “best practices” for tests that have their origins in cybersecurity.