A coalition of civil society groups and healthcare professionals is pressing California Gov. Gavin Newsom (D) to sign a bill they say would help protect consumers from chatbots posing as licensed health providers by extending existing laws against such impersonation to the technology companies building them.
“The undersigned organizations urge you to sign into law AB 489, a commonsense and vital piece of legislation that addresses the urgent concern of generative AI products -- particularly chatbots -- providing unlicensed, unregulated, and potentially dangerous ‘medical advice,’” reads an Oct. 8 letter from the groups, led by the Consumer Federation of America.
Cosigners include the National Union of Healthcare Workers, Mothers Against Media Addiction, SAVE (Suicide Awareness Voices of Education), the Tech Justice Law Project, and others.
Newsom has until Oct. 12 to sign or veto bills on his desk and has already approved other legislation regulating AI.
The letter adds to mounting pressure against unregulated AI chatbots. In June, several of the same groups filed a complaint with the Federal Trade Commission and state attorneys general formally requesting an investigation. Since then, Illinois Gov. JB Pritzker (D) has signed a law banning AI therapy apps in that state, and Texas AG Ken Paxton has opened an investigation of the issue, issuing demands to Meta and Character.AI.
And at the federal level, bipartisan legislation has been introduced to classify chatbots as “products” -- as opposed to services -- making them subject to product liability claims in court.
Companies like Character.AI have argued that their chatbots sufficiently disclose that they are not human, but in testing, the personas have insisted they are certified professionals, sometimes even fabricating license numbers, according to the groups’ complaints.
“This bill passed the legislature unanimously, which reflects a clear recognition of the serious risks posed when people rely on AI tools that appear to offer professional advice, but are not informed, experienced, confidential, or bound to any particular set of standards,” reads the letter to Newsom, which notes, “many popular AI-powered ‘therapy bots’ are being marketed in misleading ways.”
Citing examples of the potential for harm, the groups pointed to a study in which a chatbot responded to a person in recovery by saying, “it’s absolutely clear you need a small hit of meth to get through this week.”
“AB 489 draws a much needed clear line,” the groups said. “It helps prevent consumers from being misled by unlicensed AI tools, supports healthcare professionals, and provides clarity for companies developing and offering these technologies.”
