Inside AI Policy

Peters-Tillis AI procurement bill prohibits handful of uses, gives agencies power to ban others

By Charlie Mitchell / June 13, 2024

Legislation setting artificial intelligence procurement standards, expected to move through the Senate Homeland Security Committee this summer, spells out three AI uses that would be prohibited across the federal government while giving individual agencies the power to ban additional uses found to pose “unacceptable risks.”

Senate Homeland Security Chairman Gary Peters (D-MI) and Sen. Thom Tillis (R-NC) on June 11 introduced the “Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment (PREPARED) for AI Act,” with a Homeland Security and Governmental Affairs Committee markup expected this summer, according to a committee aide.

The bill says, “No agency may develop, procure or obtain, or use artificial intelligence for:”

  1. mapping facial biometric features of an individual to assign corresponding emotion and potentially take action against the individual;
  2. categorizing and taking action against an individual based on biometric data of the individual to deduce or infer race, political opinion, religious or philosophical beliefs, trade union status, sexual orientation, or other personal trait;
  3. evaluating, classifying, rating, or scoring the trustworthiness or social standing of an individual based on multiple data points and time occurrences related to the social behavior of the individual in multiple contexts or known or predicted personal or personality characteristics in a manner that may lead to discriminatory outcomes; or
  4. any other use found by the agency to pose an unacceptable risk under the risk classification system of the agency, pursuant to section 7.

Section 7 of the bill, “Agency risk classification of artificial intelligence use cases for procurement and use,” requires federal agency heads to develop “a risk classification system for agency use cases of artificial intelligence, without respect to whether artificial intelligence is embedded in a commercial product.”

The bill says the risk classification system “shall, at a minimum, include unacceptable, high, medium, and low risk classifications.”

The bill includes a detailed description of what constitutes high-risk uses in decision making, which are “presumed to serve as a principal basis for a decision or action that has a legal, material, binding, or similarly significant effect, with respect to an individual or community” in areas such as civil rights, privacy, and access to services, or for safety- and security-related decision making.

But the bill also says, “If an agency identifies, through testing, adverse incident, or other means or information available to the agency, that a use or outcome of an artificial intelligence use case is a clear threat to human safety or rights that cannot be adequately or practicably mitigated, the agency shall identify the risk classification of that use case as unacceptable risk.”
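Section 7 leaves the mechanics of the classification system to each agency head. Purely as an illustration of the tiering and escalation logic described above, and not anything the bill itself prescribes, the scheme could be sketched roughly as follows; every class, field, and function name here is a hypothetical assumption:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch of the four-tier scheme the bill requires "at a minimum";
# names and fields are illustrative assumptions, not text from the legislation.
class RiskClass(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class UseCase:
    name: str
    # Would the use serve as a principal basis for a decision or action with a
    # legal, material, binding, or similarly significant effect?
    principal_basis_for_significant_decision: bool
    # Has testing, an adverse incident, or other available information identified
    # a clear threat to human safety or rights?
    clear_threat_to_safety_or_rights: bool
    threat_can_be_adequately_mitigated: bool

def classify(use_case: UseCase, baseline: RiskClass = RiskClass.LOW) -> RiskClass:
    """Assign a risk tier following the escalation logic the bill describes."""
    # Escalation rule: a clear, unmitigable threat to safety or rights is unacceptable.
    if (use_case.clear_threat_to_safety_or_rights
            and not use_case.threat_can_be_adequately_mitigated):
        return RiskClass.UNACCEPTABLE
    # Presumption: uses driving legally or materially significant decisions are high risk.
    if use_case.principal_basis_for_significant_decision:
        return RiskClass.HIGH
    return baseline
```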

CDT on Peters-Tillis approach to ‘unacceptable risk’

Ridhi Shetty, policy counsel for the Privacy & Data Project at the Center for Democracy & Technology, commented, “Use cases will certainly vary by sector, but the bill's approach to risk classification reflects the priorities that should drive how every agency identifies, evaluates, and addresses risk. In any sector, if an AI use case presents risks to people’s rights and safety that are not adequately mitigated, those risks are deemed unacceptable and the bill prohibits the agency from that use.”

Shetty, whose digital rights group has endorsed the bill, said, “If the risks that AI presents to people’s rights and safety are insufficiently mitigated, agencies would not be permitted to use the AI until they complete testing, establish internal processes for monitoring and addressing risks, and document their use cases and risk mitigation processes.”

Further, Shetty said, “Agencies' risk classification processes must be included in their AI use case inventories. The bill also requires reports submitted to relevant congressional committees, including reporting on whether agencies are classifying risks of their AI use cases appropriately.”

Shetty noted that the prohibited uses in the bill “are very similar to the EU AI Act. Both prohibit the use of AI to evaluate or classify people based on data related to their social behavior or their known or predicted personal characteristics in ways that may lead to discriminatory outcomes.”

She explained, “While both prohibit the use of AI to infer a person’s emotion, the EU AI Act does so specifically with respect to workplaces and educational institutions, while the PREPARED for AI Act generally prohibits inferring emotion and using this inference to take action against the person. Both also prohibit the use of biometric systems to infer personal characteristics -- here again, the PREPARED for AI Act prohibits using that inference to take action against a person.”

Shetty said, “The EU AI Act enumerates other prohibited uses, whereas the PREPARED for AI Act generally prohibits using AI that the agency has found to pose unacceptable risk.”