Inside AI Policy

April 12, 2024


By Rick Weber

The New York court system has established an advisory group for developing safeguards on the use of artificial intelligence by judges and lawyers, with an emphasis on protecting confidential data and ensuring equity in the administration of justice.

By Rick Weber

Google is accusing class-action litigants of “imagined” damages from the data and information used to train the tech giant’s generative artificial intelligence models, in the latest filing in a high-profile case that could set new legal standards for developing AI technologies.

By Rick Weber

Federal Trade Commission Chair Lina Khan and Assistant Attorney General Jonathan Kanter are touting the success of continued coordination with European Union officials on enforcement of data uses for artificial intelligence and other advanced technologies.

By Charlie Mitchell

BSA-The Software Alliance has issued “global AI policy solutions” that call for “anchoring” federal procurement rules in the National Institute of Standards and Technology’s artificial intelligence risk management framework and for Congress to pass laws on privacy and high-risk impact assessments, among a set of recommendations in a dozen areas.

By Mariam Baksh

Sen. Mitt Romney (R-UT) linked open-source artificial intelligence development to the Biden administration’s effort to limit foreign adversaries’ access to “dual use” technology, while proposing a DHS center that would connect the department’s investigations unit to law enforcement agencies.

By Charlie Mitchell

The U.S. Patent and Trademark Office has issued guidance for agents and attorneys on artificial intelligence issues they must “navigate” when practicing before the USPTO, describing existing rules that apply to uses of AI as well as potential benefits and risks.

Department of Energy researchers said methods for “red teaming” the safety of artificial intelligence technologies as required under President Biden’s executive order are still being developed, including establishing “best practices” for tests that have their origins in cybersecurity.

While highlighting potential technological solutions to address verbal deepfakes, the Federal Trade Commission stressed they would need to be implemented in accordance with appropriate policy approaches.

The trade group TechNet has organized four separate meetings with House and Senate Democrats and Republicans this week to highlight member companies’ work on artificial intelligence, including mitigating risks, as lawmakers enter a crucial legislative stretch with a list of unfinished AI business.

Acting U.S. Comptroller of the Currency Michael Hsu is pointing to a new policy in the United Kingdom as a model for assigning liability and incentivizing anti-fraud efforts by banks to counter artificial intelligence-powered “push payment” scams that disproportionately target “elders and members of vulnerable communities.”

Legislation by Senate Homeland Security Chairman Gary Peters (D-MI) and Sen. Ted Cruz (R-TX) aims to streamline government procurement processes and improve training of federal acquisition officers to manage purchases of artificial intelligence and other advanced technologies.

The Congressional Research Service is advising lawmakers weighing legislation on the financial services industry’s use of artificial intelligence to balance the potential benefits of reduced costs and improved efficiencies for the industry against heightened systemic risks, including amplified discriminatory practices affecting consumers.

The Office of Management and Budget’s examination of how the federal government purchases artificial intelligence products and services raises a series of questions about government-industry interaction on the advanced technology, says Wiley Rein partner Duane Pozza, who argues that upcoming policy decisions will heavily influence the development of AI itself.

The chair of the Federal Trade Commission is not ruling out the possibility that major tech companies could be collaborating to “corner the market” on artificial intelligence, an indication of her vigilance at all levels for misconduct associated with the emerging technology.

California state lawmakers are advancing several bills aiming to establish first-time accountability and safety standards for artificial intelligence technologies, including a measure that codifies into state law the Biden administration’s Blueprint for an AI Bill of Rights and establishes the legislature’s intent that the private sector adhere to AI safeguards and protections.

A presidential advisory committee is considering a proposal to the White House on encouraging law enforcement agencies to test new artificial intelligence systems and products being deployed in the field, based on a “checklist” guidance document recently approved by a subcommittee of the group.

The Biden administration has taken another significant step to build out the nation’s infrastructure for developing transformational artificial intelligence and other advanced technologies by providing $6.6 billion to the Taiwan Semiconductor Manufacturing Company to complete one facility and build a third in Phoenix, AZ, to make the semiconductor chips that are crucial to U.S. leadership on AI developments.

Comments submitted to the National Telecommunications and Information Administration on the risks and benefits associated with making the model weights of foundational artificial intelligence models widely available reveal a clash between major trade associations over the contested issue of their training data.

Technology sector leaders have announced a new consortium of industry heavyweights that will develop plans for “upskilling and reskilling roles most likely to be impacted by AI,” with a first-phase goal of evaluating how artificial intelligence affects 56 specific “job roles” and providing “actionable insights” for businesses and workers.

The artificial intelligence development and research firm Anthropic has released a report on “many-shot jailbreaking,” a technique bad actors could use to evade safeguards around large language models and one that poses an increasingly dangerous threat as LLMs grow more powerful, the company says.

Companies proposing facial age estimation technology as a way to secure the parental consent required under the Federal Trade Commission’s rule for protecting children’s privacy online should consider refiling their application after the National Institute of Standards and Technology completes a key report, according to a letter informing them of the commission’s unanimous decision to deny the submission.

The wireless industry leader CTIA identifies artificial intelligence and other emerging technologies as “foundational” to its sector and says advancing ongoing work for private sector-led standards is essential to the national AI standards strategy being implemented by the National Institute of Standards and Technology.

A new report by the National Academy of Sciences offers advice to researchers in the United States and United Kingdom on data sharing, including recommendations on emerging artificial intelligence technologies, with a particular focus on privacy concerns at a time when officials from both countries are working on broader cooperation on transatlantic data exchanges.

A national security analyst says democratic governments should consider using deepfakes to advance certain foreign policy objectives, a recommendation that appears to push back against general calls for an across-the-board ban on the use of deceptive AI-generated images and audio based on concerns over protecting critical governing institutions.

BSA-The Software Alliance is warning against applying regulatory restrictions to so-called dual-use artificial intelligence foundation models in response to a public inquiry by the National Telecommunications and Information Administration under President Biden’s Oct. 30 AI executive order.

Mozilla, the Center for Democracy and Technology and dozens of public interest groups, think tanks and researchers are pressing Commerce Department officials to recognize the benefits of “openness and transparency” in foundational artificial intelligence models, while urging caution in setting any related export controls.