Former Rep. Jerry McNerney (D-CA), a senior policy advisor at the Pillsbury law firm who once headed the Congressional Artificial Intelligence Caucus, is preparing to launch an AI trade association focused on developing and promoting standards that eventually could be codified into federal law.
The new group will be rolled out on Oct. 10, McNerney told Inside AI Policy on Friday, with resources and legal support from Pillsbury.
“Pillsbury has the resources and oomph to do this,” he commented, adding that the group will include both large and small tech companies, academics and others. It will start out with an unpaid board of directors.
The name of the trade association has yet to be announced, he said, but the group will work on developing AI standards and eventually lobby for their adoption by Congress.
“My vision is for the private sector to work with NIST on standards” addressing data and other issues, ranging from “watermarks” that identify AI-generated content to concerns over how AI is used in employment decisions, he said. The new group will collaborate with the National Institute of Standards and Technology to develop new standards where needed, McNerney said, and will eventually look to Congress to put those AI standards into law.
“The NIST-industry collaboration has to happen and we have to accelerate this,” McNerney said. NIST issued an AI risk management framework in January, building on its years-long experience in developing and updating cybersecurity standards.
“Sooner is better than later,” McNerney said, while making clear that he’s not predicting doomsday scenarios around AI. “I’m not in the dire camp that the world will end if we don’t act today,” he quipped.
In fact, McNerney cautioned that Congress isn’t yet in a position to craft effective AI legislation. First, he said, industry and NIST should work together to develop the appropriate standards.
“In Congress a lot of people are groping around to find something that works, or to grab headlines,” McNerney said. “The level of understanding is low and I’m not too optimistic here.”
He pointed to the long-running failure to pass a federal privacy law and cautioned that “privacy has to be at the foundation of AI policy.”
Further, he said, “I’m concerned about the impact of anything Congress can produce now, the unintended consequences,” citing as an example Section 230 of the 1996 Communications Decency Act, which provides liability protection for tech companies. “One small section of a big bill had such an impact. What Congress produces now [on AI] could be problematic.”
McNerney said this rapidly evolving technology “needs to be corralled and regulated,” but that the United States needs a standards-based approach to accomplish this. “We need NIST as a partner,” he said, and ultimately “we need a national standard” on AI.
He also downplayed the prospects for creating a new federal agency to set enforceable rules on AI, commenting that it isn’t realistic given the likely Republican opposition and saying “we can work with” existing regulatory bodies.
McNerney has recently posted a series of “insights” addressing AI and other tech issues on the Pillsbury website. He works from the firm’s San Francisco office.