Tennessee’s ELVIS Act, which prohibits unauthorized use of recording artists’ voices and likenesses through AI-generated content, and other related state and federal proposals are “credit positive” for copyright holders, a new report from Moody’s Ratings says, while noting that collaboration between the music and tech industries is still needed to create an equitable and profitable environment for different stakeholders.
The Federal Election Commission, as expected, voted to reject a petition by advocacy group Public Citizen for a rulemaking targeting the use of deceptive AI-generated content by campaigns and their allies, with a Democratic commissioner urging Congress to enact bipartisan legislation pending in the Senate.
A new report from the Federal Trade Commission examines the “data practices” of major social media and streaming companies and raises concerns over their uses of personal data in artificial intelligence systems, inadequate human review of AI-driven decision-making, and “the adequacy of the companies’ data handling controls and oversight.”
The House Administration Committee has announced a “foundational” policy for the use of artificial intelligence by congressional staff and lawmakers, which the committee says is intended to evolve in response to emerging legal and ethical concerns related to the nascent yet increasingly pervasive technology.
The House Energy and Commerce Committee approved by voice vote, with key Democrats raising concerns, the Kids Online Safety Act, which would target artificial intelligence and other algorithms that rank the content viewed on social media by minors.
Stakeholders expressing concern over catastrophic risks from the development of foundational artificial intelligence models advocated the use of controversial thresholds for mitigating misuse of the technology, in comments to the National Institute of Standards and Technology.
Sen. Amy Klobuchar (D-MN) says Senate Minority Leader Mitch McConnell (R-KY) is blocking floor votes on committee-approved bills that are intended to protect the upcoming election from the threat of deepfakes and other deceptive artificial intelligence content.
Draft guidance the National Institute of Standards and Technology has issued to help developers of dual-use AI foundation models mitigate risks of the technology being misused creates confusion around the agency’s consideration of elements like bias and discrimination, HackerOne said, amid a chorus of stakeholder feedback on the document.
The Business Roundtable says artificial intelligence risk management should be based on existing regulations and industry best practices and focused on “high-risk use cases in a nuanced manner,” in a white paper offering recommendations for policymakers on ways to create “guardrails” that allow for robust innovation.
The Business Roundtable, representing CEOs of over 200 major companies, is urging Congress to codify the U.S. AI Safety Institute and the National AI Research Resource as key steps in support of artificial intelligence innovation, in a package of white papers delivered as lawmakers position several AI-related bills for year-end consideration.
The House Energy and Commerce Committee is slated to mark up the Kids Online Safety Act on Sept. 18, possibly setting the legislation, which lists artificial intelligence algorithms for regulation, on a course for final passage this year, after the Senate in July approved its version of the bill with overwhelming bipartisan support.
Senate Energy and Natural Resources Chairman Joe Manchin (D-WV) renewed his push for passage of an energy permitting reform bill by citing growing concerns about meeting the electricity demands of fueling U.S. leadership in the development of artificial intelligence technologies.
An upcoming Senate Judiciary subcommittee hearing will feature testimony from former Google and OpenAI employees who have questioned the safety practices of tech companies leading the charge to develop and deploy artificial intelligence products and services.
The software industry has sent a letter to Senate leaders in advance of floor consideration of the annual defense authorization bill asking that two amendments offered by Majority Leader Charles Schumer (D-NY) on securing artificial intelligence technologies be dropped, among other revisions and statements of support for the legislation.
A presidential advisory committee on artificial intelligence has drafted a framework for responsible use by law enforcement of facial recognition technologies, acknowledging that the controversial surveillance method is already being used and that the group is striving to offer advice on civil rights and other protections.
The Information Technology Industry Council is urging NIST’s Artificial Intelligence Safety Institute to develop additional guidance on red-teaming, roles and responsibilities, and transparency in comments on the agency’s draft document on managing risks from powerful AI “dual-use foundation models,” one of the key deliverables under President Biden’s AI executive order.
The National Security Council has met its deadline for submitting a memo to the president that will be key to holding U.S. intelligence agencies accountable for their use of artificial intelligence, according to an official from the Central Intelligence Agency.
A presidential advisory committee has unanimously approved a “checklist” as guidance for law enforcement agencies to field test the use of artificial intelligence, following up on earlier recommendations from the group that local police departments and federal law enforcement authorities should conduct such testing.
The Center for Democracy and Technology has released a report that offers recommendations for artificial intelligence developers to limit risks for disabled voters, saying the increased use of chatbots in elections poses particular threats to ensuring polling access for this vulnerable subpopulation.
A senior advisor to the Center for Democracy and Technology, commenting on Nevada’s contract with Google to more quickly process the state’s backed-up appeals for unemployment benefits, said the development illustrates the actual dangers of using the technology.
The U.S. Chamber of Commerce is urging the National Institute of Standards and Technology to make a handful of significant changes to its draft risk management guidance on “dual-use” artificial intelligence foundation models, flagging concerns over potential negative impacts on open source AI ecosystems and on how the document assigns responsibilities throughout the AI life cycle.
The National Artificial Intelligence Research Resource is crucial for leveling the playing field so public and smaller entities can keep pace with the mega companies developing the technology, according to a digital rights advocate who highlighted a need for transparency as legislation to codify the effort advanced out of the House Science Committee.
National Institute of Standards and Technology Director Laurie Locascio opened a Sept. 11 forum on standards development by pledging to redouble collaborative efforts in an area crucial to artificial intelligence, including plans to formally seek more public input on the implementation roadmap for the agency-led National Standards Strategy for Critical and Emerging Technology.
The United States has signed the “Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law” sponsored by the Council of Europe and touted as the first legally binding treaty on AI safety.
The Information Technology and Innovation Foundation says U.S. officials should focus on speeding adoption of artificial intelligence technologies under a comprehensive national strategy rather than trying to “contain” China’s advances, even as various metrics show that Beijing is closing the gap with the United States in key areas of AI development.
Palantir, a software giant focused on cyber and artificial intelligence, has named former Rep. Mike Gallagher (R-WI) to lead its defense unit, tapping the one-time chair of a China select committee whose report on U.S. venture capital fueling growth in China’s AI and semiconductor sectors highlighted issues now being considered in a Commerce Department rulemaking.
Google is asking a federal district court to dismiss, again, a revised lawsuit accusing the company of violating copyright protections when it scraped the internet for information to train Bard, its generative artificial intelligence tool.
Plaintiffs in a class-action lawsuit against OpenAI, Microsoft and its subsidiary GitHub are asking a federal district court in California to reject OpenAI’s arguments seeking to block plaintiffs from appealing the court’s ruling earlier this summer that threw out most of the allegations in the copyright infringement case.
The Authors Guild, representing plaintiffs in a class-action lawsuit against OpenAI, is asking a federal district court to approve a proposed order for inspecting how the tech company trains its generative artificial intelligence models, an issue that is core to the copyright infringement allegations against the company.
OpenAI is telling a federal district court that training artificial intelligence models involves the use and production of information not intended for human consumption, arguing that this constitutes “fair use” of the information and that the allegations of copyright infringement should be rejected.
The Electric Power Research Institute has offered a framework for utilities to manage new cybersecurity risks posed by the emergence of artificial intelligence, saying use of generative AI technologies to optimize energy production also presents risks to the integrity of data systems.
The Energy Department is seeking public comment on data governance among other issues related to its major research strategy announced a few months ago to harness the power of its 17 laboratories to boost the nation’s artificial intelligence capabilities.
Artificial intelligence scholars from various institutions are highlighting the National Institute of Standards and Technology’s failure to account for “marginal risk” in drafting guidance for developers to manage the threat of foundational AI models being misused.
The Securities Industry and Financial Markets Association is offering policymakers a tutorial on how existing risk-based regulation in the financial sector promotes both the safety and the beneficial use of artificial intelligence, while cautioning against imposing controls that it says would “stifle” innovation.