The Federal Trade Commission, along with state attorneys general and mental health licensing boards, should enforce the law against AI companies hosting chatbots that offer mental health advice while claiming to be licensed professionals, a coalition of consumer advocates stresses, particularly in light of a recent court decision classifying such applications as products.
"These companies have made a habit out of releasing products with inadequate safeguards that blindly maximizes engagement without care for the health or well-being of users for far too long,” Ben Winters, director of AI and privacy for the Consumer Federation of America, said in a June 13 press release announcing the groups’ filing a formal request for investigation to the federal and state authorities.
The request comes on the heels of legislative efforts, like one in New Hampshire, to address harms to minors from their use of AI-enabled chatbots; a potential federal moratorium on the enforcement of such laws; and a May 21 decision by the U.S. District Court for the Middle District of Florida that could have major implications for the developers of the large language models on which such bots are built.
Weighing in on the New Hampshire legislation, Winters told Inside AI Policy that in addition to eliminating a private right of action created in a House version, the state Senate also “required a 90-day ‘cure’ period, where the AG can’t even sue right away when violations occur.”
“Both of those factors file away at the teeth of a bill like this,” he said of the legislation, which could come up for a vote before the legislature adjourns on June 30.
Meanwhile, in a case involving the suicide of a teenager following his interactions with a persona created using Character A.I., the U.S. district court ruled that Google, which provided the foundational AI technology specifically tailored for the chatbot, “incorrectly concludes that aiding and abetting can never apply where the underlying tort is products liability.”
Google was named as a defendant in the case along with two individual developers who worked at the tech giant and then founded Character Technologies, which was subsequently acquired by Google. The company had moved to dismiss the product liability claims brought by the plaintiff -- the teenager’s mother -- arguing that Character A.I. is a “service, rather than a product.”
But the judge sided with the plaintiff on that question, noting criticism of an “all or nothing” approach.
“Although Character A.I. may have some aspects of a service, Plaintiff contends that it likewise has many aspects of a product,” the judge said.
But the court proceedings are already dragging on, and CFA and 21 other groups, including the National Union of Healthcare Workers and several digital rights groups, are urging timely action while highlighting the court’s decision.
The May 2025 decision critically affirmed “that Character A.I.’s LLM is a product subject to product liability law, that First Amendment rights are not attached to the outputs of the chatbot, and that Google and the co-founders must remain as defendants in addition to the [Character Technologies] corporation,” reads the request for investigation.
“Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable,” Winters said. “These characters have already caused both physical and emotional damage that could have been avoided, and they still haven’t acted to address it.”
The investigation request also argues against protection for the AI companies under Section 230 of the Communications Decency Act, pointing out that “creators” on the Character A.I. platform have minimal input into the content the chatbots output.
The consumer advocates even show that when they specifically tried to create a “therapist” character who is “unlicensed,” the application did not comply, insisting when prompted that it was a licensed therapist and even providing a specific license number.
“The coalition urges immediate investigations, enforcement actions, and regulatory guidance to ensure AI tools cannot masquerade as licensed therapists or mislead the public about professional credentials,” according to the press release.
The court decision quoted the plaintiff as noting, “Among the Characters [Character A.I.] recommends, most often are purported mental health professionals.”
“These are [AI] bots that purport to be real mental health professionals,” the judge wrote. “Plaintiff therefore properly pleads Defendants engaged in deceptive conduct … Plaintiff sufficiently states a claim for a violation of [the Florida Deceptive and Unfair Trade Practices Act].”
The consumer advocates are asking enforcers to investigate Meta AI Studio for the same practices, along with Character A.I.
