The Consumer Financial Protection Bureau has come out forcefully against suspending enforcement of foundational laws to incentivize innovative uses of artificial intelligence, and has warned against relying on “black-box” systems, in a public comment process that highlights the agency’s departure from its policies under the Trump administration.
“The CFPB has previously experimented with offering ‘sandboxes’ and other forms of special regulatory treatment to individual firms to foster innovation,” the agency wrote in comments posted Aug. 13 responding to a request for information on AI policy issued by the Treasury Department.
Treasury requested the comments in line with President Biden’s Oct. 30 executive order to promote the safe, secure and trustworthy development and use of artificial intelligence, and received 105 submissions in response from a wide range of stakeholders, including trade associations for the tech and financial industries as well as consumer protection and civil rights groups.
“The CFPB has learned important lessons from these [sandbox] programs, which fell short of their intended purpose of encouraging pro-consumer innovation in financial markets,” read the comments, which link to actions the agency took during the Trump administration.
In September 2019, then-CFPB Director Kathleen L. Kraninger promised to protect entities from liability under the Truth in Lending Act, the Electronic Fund Transfer Act, and the Equal Credit Opportunity Act while they tested financial products or services in areas of regulatory uncertainty.
The idea has since caught on with some key lawmakers as a way to encourage experimentation with artificial intelligence in service of both consumer protection and economic growth. “Sandboxes” are at the center of bipartisan, bicameral legislation introduced by Sens. Mike Rounds (R-SD) and Martin Heinrich (D-NM), both members of Senate Majority Leader Charles Schumer’s (D-NY) AI working group.
Groups like the National Fair Housing Alliance and the Consumer Federation of America have noted AI’s potential to create less discriminatory alternatives to systems that rely too heavily on flawed traditional inputs, which reflect biases that can lead to discrimination against protected classes in violation of laws like ECOA. They have called for written guidance from the CFPB as entities also aim to comply with responsible lending rules under the Dodd-Frank Act, amid narratives pitting fairness against accuracy.
But Adam Rust, director of financial services for the consumer federation, told Treasury, “The use of regulatory sandboxes as a tool for cultivating advancements in financial services is a mistaken approach.” And Michael Akinwumi, chief AI officer for the NFHA and a Rita Allen Civic Science fellow, told Inside AI Policy such sandboxing, which suspends consumer protection and other laws, would inappropriately and unnecessarily treat consumers “like guinea pigs.”
CFPB agreed. “The CFPB believes that innovation need not be at odds with compliance with federal consumer protection laws,” the agency wrote. “Indeed, innovation is fostered when regulators ensure that all market participants adhere to the same set of rules and compete on a level playing field. Yet in their effort to encourage innovation, the previous CFPB programs sometimes waived (or at least purported to waive) important consumer protections provided by Congress, including protections from discrimination.”
Other commenters in the Treasury docket also highlighted differences between the CFPB under Biden and the CFPB under Trump while calling for the agency to “reconcile” inconsistencies in its position on the extent to which entities using AI to approve loans, for example, must be able to explain their decisions to consumers when the result is an “adverse action.”
The explainability issue is crucial as entities, including those beyond the financial sector, increasingly rely on AI models whose inner workings even they do not fully understand to make highly consequential decisions about individuals.
“The CFPB’s recent circular declares that a creditor must not only disclose the factor that resulted in adverse action, but must also provide more specificity about the factor,” read comments from the American Bankers Association, citing a March 2023 publication. “For example, according to the circular, it would be insufficient for the creditor to state ‘purchasing history’ or ‘disfavored business patronage’ as the principal reason for adverse action, without more detail, such as the business location, the type of goods purchased, and so on.”
“In contrast, the official commentary [regarding Regulation B] states that a creditor may provide a reason such as ‘age of collateral’ even if such a factor’s relationship to creditworthiness may not be clear to the applicant,” ABA wrote, citing a July 2020 document that CFPB has labeled “incomplete.”
Still, ABA wrote, “the CFPB’s varying statements on a creditor’s responsibility have resulted in confusion and may discourage creditors from beneficial use of AI. The CFPB should clarify its expectations and should coordinate with other regulators so that there is a level playing field for all creditors.”
CFPB’s comments on the issue in the docket were certainly more definitive.
“Courts have held in other contexts that a firm’s decision to use algorithmic, machine-learning, or other types of automated decision-making tools can itself be a policy that produces bias prohibited under civil rights laws,” the agency wrote. “This logic also applies with respect to compliance with the Equal Credit Opportunity Act.”
CFPB said the agency “has provided guidance on the use of black-box credit models, making clear that lenders must provide accurate and specific reasons when they deny credit or take other adverse actions against a consumer, regardless of the complexity or opacity of their models or the use of ‘artificial intelligence.’”
“If firms cannot manage using a new technology in a lawful way, then they should not use the technology,” CFPB said.
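To make concrete the kind of specificity the circular describes, consider the following minimal sketch. It is purely illustrative and not drawn from CFPB guidance or any lender’s actual system: the feature names, weights, thresholds and reason wording are all hypothetical assumptions. The point is the mapping step, which turns the factors that most lowered an applicant’s score into specific, plain-language reasons rather than bare labels like “purchasing history.” For an opaque model, the per-feature contributions could instead come from a post-hoc attribution method; the mapping step would stay the same.

```python
# Hypothetical sketch of adverse-action reason generation. All feature
# names, weights, and reason templates below are invented for illustration.

# Toy linear scoring model: weight * value, summed. A real underwriting
# model would be far more complex; the reason logic is the point here.
FEATURES = {
    "months_since_delinquency": 0.4,
    "credit_utilization": -0.6,
    "collateral_age_years": -0.3,
    "income_to_debt_ratio": 0.5,
}

# Specific, plain-language reason templates keyed by feature -- the level
# of detail the circular asks for, as opposed to a vague label.
REASON_TEMPLATES = {
    "credit_utilization": "Revolving credit utilization of {value:.0%} exceeds program guidelines",
    "collateral_age_years": "Collateral is {value:.0f} years old, above the maximum age for this loan product",
    "months_since_delinquency": "Most recent delinquency occurred {value:.0f} months ago",
    "income_to_debt_ratio": "Income-to-debt ratio of {value:.2f} is below the required minimum",
}

def contributions(applicant: dict) -> dict:
    """Per-feature contribution of this applicant's values to the score."""
    return {name: FEATURES[name] * applicant[name] for name in FEATURES}

def adverse_action_reasons(applicant: dict, top_n: int = 2) -> list[str]:
    """Return the top_n most score-lowering factors as specific reasons."""
    contrib = contributions(applicant)
    # The most negative contributions are the principal reasons for denial.
    worst = sorted(contrib, key=contrib.get)[:top_n]
    return [REASON_TEMPLATES[f].format(value=applicant[f]) for f in worst]

if __name__ == "__main__":
    denied_applicant = {
        "months_since_delinquency": 3,
        "credit_utilization": 0.92,
        "collateral_age_years": 14,
        "income_to_debt_ratio": 0.8,
    }
    for reason in adverse_action_reasons(denied_applicant):
        print(reason)
```

Run against the sample applicant, the sketch prints the collateral-age and credit-utilization reasons with concrete figures, rather than a generic factor name, which is the distinction the CFPB circular draws.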
