John Clocker, deputy chief administrative officer for the House of Representatives, identified governance, guidance and training, and expanded use of “validated” generative AI large language model technology as key elements for integrating artificial intelligence into congressional operations and services, a priority list that tracks with risk management approaches at federal agencies and within industry.
“Using information obtained through the NIST-based governance assessment, data collected from its legal and cyber reviews of GenAI LLM technologies, and House usage information collected through the AI Advisory Group, the CAO will work with the Committee to establish a responsible path forward for continued GenAI LLM House integration,” Clocker testified at the House Administration Committee’s Jan. 30 hearing on AI innovations and uses in the House.
He said, “The CAO believes there are three critical components to successful integration: 1. Establishing governance and security controls; 2. Developing guidance and training (upskilling) opportunities for House users; and 3. Expanding House-validated GenAI LLM technology.”
The House Administration Committee, which oversees internal operations of the chamber and related agencies and offices, heard from Clocker; Taka Ariga of the Government Accountability Office; Judith Conklin of the Library of Congress; and Hugh Halpern, director of the Government Publishing Office.
Clocker in his presentation cited an “AI governance assessment” conducted last fall using the AI risk management framework issued last year by the National Institute of Standards and Technology, saying, “CAO’s Office of Cybersecurity anticipates developing a new House Information Security Policy regulating GenAI LLM integration and use within the House environment.”
On guidance and training he said, “The House needs to establish guidance and upskilling opportunities so staff can correctly optimize use of the GenAI LLM tools. Staff need principle-based and technical training. They will need House-wide forums to share and obtain best practices.”
Clocker said, “The CAO also recommends offices adopt internal AI use policies that dictate specifically how the technology is used based on each Member’s preference. To assist with these policies, the CAO could work with the Committee to develop a model AI use policy.”
Takeaways
House Administration Chairman Bryan Steil (R-WI) in a release after the hearing pegged the CAO’s work to tailor the NIST AI framework to the legislative branch’s needs as one of the “key takeaways” from the hearing.
The release quoted Clocker responding to a question from Rep. Barry Loudermilk (R-GA): “I’m glad you recognize this is a very difficult environment sometimes. We are using the NIST framework for AI. We think it’s a good framework. It’s very thoughtful, and we’re going to tailor it for the challenging environment here in the House of Representatives.”
Clocker said, "When we do that we will be working with this committee, we're gonna be working with the Sergeant at Arms, we're going to work with the Clerk, and other officers of the House because they all use the same framework when we adopt cyber policies."
Steil’s release also pointed to Clocker’s remarks on congressional offices’ use of ChatGPT+ to produce “that first draft and then giving it to a human to go from there, first draft of testimony, first draft of witness questions, first draft of a speech.”
Clocker said, "The pitfall is what we've heard here … It does hallucinate, it's also very confident right that it knows what it's talking about. It sounds very confident, even though it is hallucinating, and so we will address that through training and how to use the tools effectively."
Steil also flagged his exchange with the Library of Congress’ Conklin on uses in the copyright area.
"As I mentioned we do have a significant historical data that includes historical copyrighting, and we are currently forming experiment records...we have a current experiment with the Copyright Office on their historical records,” Conklin told the panel.
"We have a significant amount of historical data and we are currently experimenting with the Copyright Office historical data, and that will make the data more discoverable on those digitized records that exist today,” she said.
"The differences between digitizing the records and making them discoverable via AI is a machine can search much faster using AI technologies, and that's what our hope is that researchers can use that data for the research,” Conklin said.
The Copyright Office plans to report this year on its inquiry into whether new legislation and regulation are needed to address issues related to AI.
Ranking member Joe Morelle (D-NY) also hit on Library of Congress issues in his opening statement, saying it “is using AI to create standardized records from eBooks, extract data from historic documents, and help blind and print-disabled patrons access library resources. At the same time, the Copyright Office is grappling with how it should consider registration applications for AI-produced works.”
Morelle noted other current uses at the Government Publishing Office, and by the House’s Office of the Chief Administrative Officer itself. Further, he said, the Government Accountability Office “has established a lab to design and implement new AI technologies, as well as an internal working group to analyze AI governance issues. All of these are critically important, and all very exciting.”
Morelle said, “AI can simplify complex tasks, provide insights into data, build capacity, improve workflows, and more. But for all the exciting opportunities that AI presents, we must also be cognizant of its threats and risks.”