Senate Homeland Security Chairman Gary Peters (D-MI) has scheduled a major artificial intelligence procurement bill for markup on July 24, advancing a measure that would establish the most extensive rules to date around government purchases of AI products and services.
The bipartisan bill, S. 4495, the “Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment (PREPARED) for Artificial Intelligence Act,” was introduced June 11 by Peters and Sen. Thom Tillis (R-NC). It is among 35 bills slated for markup on July 24.
The Peters-Tillis bill could be a candidate for inclusion in the annual National Defense Authorization Act expected to move across the Senate floor this fall, and would likely be the most sweeping piece of AI legislation to advance through the 118th Congress.
“Artificial intelligence has the power to reshape how the federal government provides services to the American people for the better, but if left unchecked, it can pose serious risks,” Peters said last month. “These guardrails will help guide federal agencies’ responsible adoption and use of AI tools, and ensure that systems paid for by taxpayers are being used safely and securely.”
The introduction of the PREPARED Act drew early positive reviews from industry and civil society stakeholders.
The program would be implemented by the Office of Management and Budget and overseen by the Senate Homeland Security and Governmental Affairs and House Oversight committees. OMB must ensure federal agencies have implemented the bill’s requirements within one year of enactment, according to the text.
It says “the Federal Acquisition Regulatory Council shall review Federal Acquisition Regulation acquisition planning, source selection, and other requirements and update the Federal Acquisition Regulation as needed to ensure that agency procurement of artificial intelligence includes”:
(A) a requirement to address the outcomes of the risk evaluation and impact assessments required under section 8(a);
(B) a requirement for consultation with an interdisciplinary team of agency experts prior to, and throughout, as necessary, procuring or obtaining artificial intelligence; and
(C) any other considerations determined relevant by the Federal Acquisition Regulatory Council.
Further, it says, “Beginning on the date that is 1 year after the date of enactment of this Act, the head of an agency may not procure or obtain artificial intelligence for a high risk use case, as defined in section 7(a)(2)(D), prior to establishing and incorporating certain terms into relevant contracts, agreements, and employee guidelines for artificial intelligence, including”:
(i) a requirement that the use of the artificial intelligence be limited to its operational design domain;
(ii) requirements for safety, security, and trustworthiness, including—
(I) a reporting mechanism through which agency personnel are notified by the deployer of any adverse incident;
(II) a requirement, in accordance with section 5(g), that agency personnel receive from the deployer a notification of any adverse incident, an explanation of the cause of the adverse incident, and any data directly connected to the adverse incident in order to address and mitigate the harm; and
(III) that the agency has the right to temporarily or permanently suspend use of the artificial intelligence if—
(aa) the risks of the artificial intelligence to rights or safety become unacceptable, as determined under the agency risk classification system pursuant to section 7; or
(bb) on or after the date that is 180 days after the publication of the most recently updated version of the framework developed and updated pursuant to section 22A(c) of the National Institute of Standards and Technology Act, the deployer is found not to comply with such most recent update;
(iii) requirements for quality, relevance, sourcing and ownership of data, as appropriate by use case, and applicable unless the head of the agency waives such requirements in writing, including—
(I) retention of rights to Government data and any modification to the data including to protect the data from unauthorized disclosure and use to subsequently train or improve the functionality of commercial products offered by the deployer, any relevant developers, or others; and
(II) a requirement that the deployer and any relevant developers or other parties isolate Government data from all other data, through physical separation, electronic separation via secure copies with strict access controls, or other computational isolation mechanisms;
(iv) requirements for evaluation and testing of artificial intelligence based on use case, to be performed on an ongoing basis; and
(v) requirements that the deployer and any relevant developers provide documentation, as determined necessary and requested by the agency, in accordance with section 8(b).
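Taken together, these clauses amount to a pre-award checklist for high-risk procurements. As a rough illustration only, the hypothetical Python sketch below models that checklist as a data structure; the bill prescribes no code, and every name here (HighRiskContractTerms, may_procure, and the individual fields) is invented for this example.

```python
from dataclasses import dataclass

@dataclass
class HighRiskContractTerms:
    """Hypothetical checklist mirroring the contract terms the bill
    would require before an agency procures AI for a high-risk use case."""
    limited_to_operational_design_domain: bool  # clause (i)
    adverse_incident_reporting_mechanism: bool  # clauses (ii)(I)-(II)
    suspension_rights_reserved: bool            # clause (ii)(III)
    government_data_rights_retained: bool       # clause (iii)(I)
    government_data_isolated: bool              # clause (iii)(II)
    ongoing_evaluation_and_testing: bool        # clause (iv)
    deployer_documentation_provided: bool       # clause (v)

def may_procure(terms: HighRiskContractTerms) -> bool:
    """Procurement for a high-risk use case may proceed only once
    every required term is incorporated into the contract."""
    return all(vars(terms).values())

if __name__ == "__main__":
    draft = HighRiskContractTerms(
        limited_to_operational_design_domain=True,
        adverse_incident_reporting_mechanism=True,
        suspension_rights_reserved=True,
        government_data_rights_retained=True,
        government_data_isolated=False,  # government data not yet segregated
        ongoing_evaluation_and_testing=True,
        deployer_documentation_provided=True,
    )
    print("May procure:", may_procure(draft))  # -> False
```

The gate in this sketch is deliberately all-or-nothing, mirroring the bill’s condition that an agency head may not procure AI for a high-risk use case “prior to establishing and incorporating” the listed terms.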
The bill would codify the Chief Artificial Intelligence Officers Council created under President Biden’s Oct. 30, 2023, executive order on AI.
It also includes requirements on AI incident reporting as well as on agency governance of artificial intelligence.
The bill requires agencies to establish “a risk classification system for agency use cases of artificial intelligence, without respect to whether artificial intelligence is embedded in a commercial product.”
The text says, “If an agency identifies, through testing, adverse incident, or other means or information available to the agency, that a use or outcome of an artificial intelligence use case is a clear threat to human safety or rights that cannot be adequately or practicably mitigated, the agency shall identify the risk classification of that use case as unacceptable risk.”
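The classification rule quoted above reduces to a single decision: a clear, unmitigable threat to safety or rights lands in the unacceptable tier. Below is a minimal, hypothetical sketch of that logic; the names (RiskClass, classify_use_case) are invented for illustration, and the tiers themselves are assumptions, since the bill leaves each agency to define its own classification system.

```python
from enum import Enum

class RiskClass(Enum):
    # Hypothetical tiers; the bill does not enumerate these.
    LOW = "low"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

def classify_use_case(clear_threat_to_safety_or_rights: bool,
                      threat_can_be_mitigated: bool,
                      default: RiskClass = RiskClass.HIGH) -> RiskClass:
    """Apply the rule quoted above: a clear threat to human safety or
    rights that cannot be adequately or practicably mitigated is
    classified as unacceptable risk; otherwise the agency's own
    system assigns a tier (defaulted here purely for illustration)."""
    if clear_threat_to_safety_or_rights and not threat_can_be_mitigated:
        return RiskClass.UNACCEPTABLE
    return default

# Example: a use case posing an unmitigable threat to rights.
print(classify_use_case(clear_threat_to_safety_or_rights=True,
                        threat_can_be_mitigated=False))
# -> RiskClass.UNACCEPTABLE
```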
It explicitly prohibits certain agency uses of AI:
(1) mapping facial biometric features of an individual to assign corresponding emotion and potentially take action against the individual;
(2) categorizing and taking action against an individual based on biometric data of the individual to deduce or infer race, political opinion, religious or philosophical beliefs, trade union status, sexual orientation, or other personal trait;
(3) evaluating, classifying, rating, or scoring the trustworthiness or social standing of an individual based on multiple data points and time occurrences related to the social behavior of the individual in multiple contexts or known or predicted personal or personality characteristics in a manner that may lead to discriminatory outcomes; or
(4) any other use found by the agency to pose an unacceptable risk under the risk classification system of the agency, pursuant to section
It also authorizes AI testing programs and pilot projects, and promotes procurement innovation labs.
Ridhi Shetty of the Center for Democracy & Technology commented, “The government’s procurement and use of AI must serve the public’s interest. We hope the committee advances strong guardrails for federal AI practices to ensure that agencies use public dollars responsibly and minimize harm to marginalized communities.”
Aaron Cooper, senior vice president of global policy at BSA-The Software Alliance, didn’t comment specifically on the PREPARED Act but said of ongoing congressional efforts on AI, “The big picture is that AI legislation is clearly starting to germinate in Congress. To the credit of members in both chambers, they spent a lot of time over the last couple years listening to different perspectives on AI, floating proposals for AI legislation, and taking feedback.”
Cooper said, “The question now is what legislation can both make a real difference and is achievable this year. BSA would most ideally want one of those issues to be legislation requiring risk management frameworks for high-risk uses of AI, though Congress seems more inclined to address discrete AI issues for now.”
Editor’s Note: This story was revised to show correct markup date.
