Inside AI Policy

Stakeholders see innovation benefits, transparency in Senate AI procurement measure

By Charlie Mitchell / June 14, 2024

A key technology group is praising the pro-innovation approach in a government artificial intelligence procurement bill by Senate Homeland Security Chairman Gary Peters (D-MI) and Sen. Thom Tillis (R-NC), while a digital rights leader sees significant benefits in the measure’s approach to transparency and other guardrails.

"As the largest purchaser of goods and services in the world, it’s important that the U.S. government implements innovation-friendly AI procurement policies that enable all agencies to benefit from best-in-class AI capabilities,” Megan Petersen, the Information Technology Industry Council’s senior vice president of policy, public sector and counsel, said in comments to Inside AI Policy.

“Developers that provide AI to federal agencies will have to make sure their practices comply with the bill’s disclosure requirements,” Ridhi Shetty, policy counsel for the Privacy & Data Project at the Center for Democracy and Technology, said in a separate interview with Inside AI Policy.

“The increased transparency would help agencies regulate private sector uses of AI that are within their respective areas of authority but have been difficult to enforce against due to the opacity of many AI uses. The private sector should incorporate the bill’s other guardrails into industry AI practices, too,” Shetty said. CDT has endorsed the Peters-Tillis bill.

Peters and Tillis on June 11 introduced the “Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment (PREPARED) for AI Act.” A Homeland Security and Governmental Affairs Committee aide told Inside AI Policy a markup is expected this summer.

Multiple industry groups said they are digging into the details of the new bill and have yet to offer extensive comments on the Peters-Tillis approach to AI procurement.

Jordan Crenshaw, senior vice president of the U.S. Chamber of Commerce’s Technology Engagement Center, said, “We look forward to working with Senators Peters and Tillis, and others on ways to responsibly deploy and use AI technology in the federal government. As noted in our recent report, federal IT modernization -- especially the incorporation of AI -- can provide long-term cost savings and efficiencies within the government.”

ITI’s Petersen said, “Chairman Gary Peters and Senator Thom Tillis’ PREPARED for AI Act drives innovation at all stages of the government’s AI acquisition process -- from ensuring risks and safeguards for public and private sector data are appropriately considered in the early stages of acquisition planning, to providing additional authorities for competitively testing, evaluating, and selecting commercial offerings.”

She said, “ITI looks forward to engaging with Congress, the Federal Acquisition Regulatory (FAR) Council, and other U.S. government agencies to develop a framework for responsibly developing and deploying mission-enhancing AI solutions built through private sector innovation, in line with existing commercial best practices.”

CDT’s Shetty commented on various aspects of the bill.

She observed that the bill incorporates provisions from President Biden’s Oct. 30 executive order on artificial intelligence, “but it also elaborates on certain requirements, such as the risk classification and evaluation process, disclosures and reporting by and between agencies and developers, and human oversight.”

Further, she said, “It also goes into more detail regarding the implementation of these safeguards to AI procurement, not just use. State and local agencies can and should use the bill as a model for their own AI governance. The bill should also inform federal agencies' oversight of grants to state and local agencies for the use of AI, as well as agencies' regulation of the private sector.”

She noted, “In addition to evaluating agencies' approach to risk classification, the required reports to Congress include evaluating agencies’ compliance in general and identifying challenges agencies have with risk evaluation and testing, recommended solutions to address them, and use cases that agencies have not publicly disclosed due to their sensitivity.”

“This reporting,” she said, “along with incorporation of the bill’s requirements into the Federal Acquisition Regulations, provides a degree of cohesion in cross-government compliance and would help surface the need for additional measures. One opportunity for a more centralized approach is the AI use case inventories -- a publicly available consolidated inventory of all agencies’ AI use cases would be useful to identify gaps in agencies' AI governance.”

Shetty also discussed the bill’s elements on “unacceptable risk.”