The General Services Administration’s top IT procurement official is offering assistance to new chief artificial intelligence officers installed at federal agencies under President Biden’s executive order, saying inventorying uses of these technologies will be crucial to developing AI safeguards.
December 2, 2023
Plaintiffs in a class-action lawsuit have filed a highly anticipated revised complaint against Stability AI after a federal judge in October rejected most of the copyright infringement allegations in the case, setting the stage for a potentially narrower outcome in litigation that could establish a legal standard for the data used to train generative artificial intelligence models.
Sen. Amy Klobuchar (D-MN) says draft legislation circulated in October to protect actors and other artists from the unfair use of likenesses generated by artificial intelligence was a focus of discussions this week during the Senate’s latest closed-door meeting on the government’s role in regulating AI.
As a chorus of transatlantic public interest groups calls for governments to build their own bedrock artificial intelligence systems, the Harvard Kennedy School’s Bruce Schneier says the National Artificial Intelligence Research Resource backed by key U.S. policymakers could lay the necessary groundwork.
The reimbursement policies of the Centers for Medicare and Medicaid Services have emerged as a focal point in discussions on reforms that would help advance the use of artificial intelligence in health care, taking the spotlight during a House hearing on AI and the health sector.
Meta Platforms is urging the U.S. Copyright Office, as well as Congress, to avoid imposing “additional copyright restrictions” or establishing licensing regimes as regulators and lawmakers alike consider numerous policy implications of generative AI.
The plaintiffs in a prominent class-action lawsuit against health insurance giant Cigna for its use of artificial intelligence in processing patient claims are planning to revise for a second time their complaint following a meeting with the company’s lawyers about its plans to ask the court to dismiss the case.
While 47 countries have now signed on to principles, including a commitment to transparency, for the responsible military deployment of artificial intelligence, the details of how those commitments will affect technology also used for commercial purposes -- as described in the recent executive order on AI -- are drawing pushback from companies that fear harm to competition in the industry.
The Cybersecurity and Infrastructure Security Agency and its British counterpart have issued artificial intelligence guidance that emphasizes software secure-by-design principles while underscoring the U.S. and United Kingdom’s determination to lead on global AI standards.
Google LLC in comments to the U.S. Copyright Office argues that current copyright law is sufficient in the face of issues raised by quickly evolving artificial intelligence products, while also pointing to its own initiatives to help consumers identify AI-generated content.
Witnesses set to testify before the House Energy and Commerce health subcommittee differ over the extent to which satisfactory oversight exists for artificial intelligence tools that promise to reduce healthcare workers’ burnout by minimizing certain interactions between patients and clinicians, among other potential benefits.
The House Energy and Commerce health subcommittee will hold a Nov. 29 hearing on artificial intelligence, the latest in a series on AI initiated by full committee Chair Cathy McMorris Rodgers (R-WA).
The bipartisan leaders of the House Armed Services innovation subcommittee have introduced a bill that would set up a working group among the nations of the “Five Eyes” intelligence alliance to coordinate on a sweeping artificial intelligence initiative focused on research, testing and deployment of AI systems.
A new study by Stanford and the University of Chicago recommends content verification as a countermeasure to the threat of artificial intelligence-generated “deepfakes” misleading voters, as senators are pushing a bipartisan plan that would take the more draconian approach of banning “deepfakes” in political advertising.
A former intelligence community lawyer is urging private-sector legal colleagues to get involved early in the development of artificial intelligence products to help those tech firms avoid potential liability pitfalls from the training and use of these emerging technologies, which rely on massive amounts of data.
The Biden administration’s executive order on artificial intelligence is “absolutely” good industrial policy, according to former special assistant to the president for technology and competition policy Tim Wu, who recently diverged somewhat from public interest advocates in validating efforts to beat China in a race to develop the technology.
Arati Prabhakar, director of the White House Office of Science and Technology Policy, says the “landmark” executive order on artificial intelligence signed recently by President Biden is “the most significant action anyone, anywhere in the world has taken” on AI, and that the sprawling document will help “keep the entire set of risks and opportunities in frame.”
The Cybersecurity and Infrastructure Security Agency at DHS has issued a policy roadmap under President Biden’s artificial intelligence executive order that describes five areas where CISA will focus its AI efforts, both internally and in collaboration with industry partners.
The California Privacy Protection Agency has released draft rules for “automated decisionmaking technology” that allow consumers to opt out of its use, ahead of a meeting next week to review the landmark requirements before launching a formal rulemaking process next year.
Microsoft in comments to the U.S. Copyright Office cites a potential need for additional protections around “digital replicas” created by artificial intelligence tools, while arguing that current copyright law already protects content creators against infringement from training AI models.
The Consumer Technology Association is touting the guardrails and flexibility in its artificial intelligence policy framework and sees the document as providing key guidance for upcoming AI legislation.
Google has released an “opportunity agenda” for artificial intelligence that includes recommendations for government actions including the development of a research sharing network that is the focus of a leading AI legislative proposal in the Senate.
Industry organizations are the leading recipients of U.S. government research grants related to artificial intelligence, according to a “data brief” from the Center for Security and Emerging Technology at Georgetown University, which found the level of federal AI grants remained steady in recent years despite escalating interest among policymakers.
A new report from the Open Markets Institute makes the case that policymakers should prioritize the use of longstanding competition laws to minimize harm while maximizing the benefits from artificial intelligence, by laying out the extent to which a handful of industry giants maintains control over essential components of the technology.
MITRE, a prominent nonprofit security consultancy, has updated its framework focused on artificial intelligence threats to address attack pathways and vulnerabilities in generative AI and large language models such as ChatGPT.
The National Institute of Standards and Technology will set up a new artificial intelligence testing center under President Biden’s executive order that is expected to establish the standards by which AI developers report on the safety of their products to the government, according to digital rights advocates and administration officials.