The bipartisan leaders of the House Administration Committee are pressing the U.S. Copyright Office to complete and issue a three-part report on whether additional regulations and legislation are needed to address copyright issues raised by generative artificial intelligence.
November 4, 2024
Federal Energy Regulatory Commission Chairman Willie Phillips says the goals of the bipartisan CHIPS and Science Act are at stake as regulators set policies to address consumer and cost concerns raised by tech industry proposals to co-locate energy-hungry AI data centers at power plants, sometimes at revived nuclear facilities.
The real extinction threat in the artificial intelligence space comes not from a theoretical weapon of mass destruction, according to researchers at AI NOW, but rather from the use of commercial foundation models in military capabilities.
Tech giant IBM says the Bureau of Industry and Security should address two key issues in order to bring its proposed reporting rule for powerful artificial intelligence foundation models into closer harmony with President Biden’s underlying AI executive order.
The Treasury Department has “certified” that its final rule intended to restrict artificial intelligence, semiconductor and quantum information technology investments in China will have little effect on smaller entities while spelling out a series of exceptions to the prohibition on covered investments in “countries of concern.”
The National Artificial Intelligence Advisory Committee on Nov. 21 will hear from outside experts on “AI and hardware” and “AI and energy,” as the presidential advisors explore the intertwined issues of rapid innovation and growing energy demand.
The White House-released “governance” framework that accompanies the new national security memo on artificial intelligence sets out four “pillars” to guide the U.S. government’s approach to AI in security systems and is intended to complement earlier guidance from the Office of Management and Budget.
The Information Technology Industry Council says red-teaming, advanced systems and international standards are three major policy "themes" that emerged from President Biden's Oct. 30, 2023 AI executive order and related activities during what it calls "the busiest week in global AI policy."
The Center for Data Innovation is criticizing a Biden administration national security memorandum on artificial intelligence as a missed opportunity to steer policymakers away from aggressive antitrust actions against major AI companies.
The Bureau of Industry and Security should adopt language that facilitates disclosures by, and protects, company insiders in a rule requiring developers of dual-use AI foundation models to submit regular reports on their activities, according to nonprofit groups concerned about a broad range of risks associated with the technology.
Sen. Mike Rounds (R-SD), a member of the Senate's bipartisan artificial intelligence working group, said in an interview with Inside AI Policy that the Biden administration has focused too heavily on AI risk assessment and reporting requirements at the expense of encouraging innovation.
The Information Technology Industry Council has brought together a diverse group of stakeholders in a letter pressing congressional leaders to authorize the U.S. AI Safety Institute, although not all signers endorse the specific legislation referenced.
Sen. Marsha Blackburn’s (R-TN) office is distributing a memo defending the Kids Online Safety Act approved by the Senate with overwhelming bipartisan support in July, amid increasing doubts the House will take up the bill this year as tech companies and conservative groups raise objections and concerns about government censorship.
Rep. Mike McCaul (R-TX) -- sponsor of the ENFORCE Act -- is among four lawmakers identified as “responsible AI champions” by Americans for Responsible Innovation, a relatively new lobby group backing the controversial AI licensing legislation.
Stakeholders grappling with artificial intelligence policy need to draw lessons from the cybersecurity domain and write the rules of the road now, before vulnerabilities become ingrained, to reap the benefits of AI while confronting a set of “existential” risks, says tech policy veteran Norma Krayem.
National Security Advisor Jake Sullivan told an audience of military officials and contractors that President Biden’s new artificial intelligence national security memorandum will force a major overhaul of how the Defense Department, and by extension the entire government, tests and purchases technologies, with implications throughout the increasingly tech-dependent economy.
Industry sources say the Office of Management and Budget has engaged heavily with stakeholders in implementing guidance for federal agencies on buying artificial intelligence products and services, as deadlines approach for ensuring government contracts with vendors comply with White House-directed requirements.
OpenAI is praising President Biden's new national security memorandum on leveraging U.S. leadership in artificial intelligence to protect American interests abroad, saying the presidential directive's call for AI guardrails supports and aligns with the company's recently established "values" for promoting the technology's widespread use.
The Electronic Privacy Information Center and the Recording Industry Association of America are urging the Commerce Department's Bureau of Industry and Security to require information about the material used to train frontier artificial intelligence models, as the agency finalizes reporting rules for developers of the technology.
The Computer and Communications Industry Association urges the Bureau of Industry and Security to move away from a quarterly reporting requirement and to further explain how it will protect data submitted by companies, in comments on proposed reporting rules for advanced artificial intelligence models.
Developers and deployers of artificial intelligence can get a custom risk mitigation plan for their systems by answering detailed questions on a Google website, according to the company, which is using language similar to that of the National Institute of Standards and Technology to promote the exercise.
The Chamber of Progress says the Bureau of Industry and Security’s proposed reporting rules for advanced artificial intelligence models would disadvantage open-source developers and that its “thresholds” for setting risk levels could quickly become outdated and a drag on innovation.
Biden administration officials emphasized the role of stakeholders in developing and producing a final rule to restrict “outbound investment” from the United States that could help China develop artificial intelligence and other advanced technologies posing threats to national security.
Robert Silvers, Department of Homeland Security undersecretary for strategy, policy and plans, says the United States is engaged in an artificial intelligence "arms race" with its adversaries and that system defenders are currently winning against heightened security risks, but that the country's leadership position on this transformational technology is tenuous.
Organizations should review the needs of stakeholders beyond regulatory bodies in deciding how to release artifacts about the data and design of their artificial intelligence systems, and be prepared to simultaneously meet various transparency objectives, according to a report from the Center for Democracy and Technology.
The National Institute of Standards and Technology, in a new request for information, asks how awards and recognition programs can encourage participation in standards development work related to artificial intelligence and other emerging technologies, requests best practices for standards workforce education, and seeks general input on the national standards strategy.
Microsoft is telling a federal court that the New York Times is not entitled to “unfettered” access to internal documents about the tech giant’s plans for artificial intelligence products and services, arguing the move is simply a fishing expedition by the media company to expand its copyright infringement lawsuit.
The New York Times and other news outlets are urging a federal court to issue an order allowing the companies to inspect OpenAI's generative artificial intelligence models as part of a landmark copyright lawsuit that could impose limits on the data and information AI developers use to train their models.
The New York Times, as part of a larger lawsuit, is pushing back against efforts by Microsoft and OpenAI to consolidate the sharing of documents, or discovery, among numerous alleged copyright infringement cases against the companies in New York and California, with the media giant arguing there are subtle but significant differences among the complaints.
Former Federal Communications Commission Chair Tom Wheeler warns in a new posting that "artificial superintelligence," which he calls "a capability exponentially beyond that of today's AI," will arrive far more quickly than policymakers imagine, and that the prospects for creating sufficient regulatory guardrails have dimmed thanks to recent Supreme Court decisions.
Election security analysts say heightened concerns about disinformation from artificial intelligence have not materialized in any significant way, so far, adding that recent requirements about transparency and the use of paper ballots are helping to mitigate cyber and other digital risks to election infrastructure in the run-up to Election Day.
The Electric Power Research Institute is requesting financial support from the electric utility industry to study the barriers and potential benefits for the use of generative artificial intelligence technologies, an initiative that comes amid heightened concerns about the grid impacts of increased power demands for data centers and AI.
A new report from the Information Technology Industry Council identifies ways in which artificial intelligence can be deployed to improve organizations’ cybersecurity along with recommendations on securing AI systems themselves.
Regulators should work across agencies and governments -- and enforce transparency -- in order to ensure interoperable artificial intelligence systems capable of supporting an innovative and accessible range of AI products and services, according to a joint report from Mozilla and the Open Markets Institute.