Pennsylvania sues Character.AI after a chatbot allegedly posed as a doctor

Image Credits: Emilee Chinn / Getty Images

10:46 AM PDT · May 5, 2026

The Commonwealth of Pennsylvania has filed a suit against Character.AI, claiming that one of the company’s chatbots masqueraded as a psychiatrist in violation of the state’s medical licensing rules.

“Pennsylvanians deserve to know who — or what — they are interacting with online, especially when it comes to their health,” said Governor Josh Shapiro in a statement on Tuesday. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”

According to the state’s filing, a Character.AI chatbot called Emilie presented itself as a licensed psychiatrist during testing by a state Professional Conduct Investigator, maintaining the pretense even as the investigator sought treatment for depression. When asked if she was licensed to practice medicine in the state, Emilie stated that she was, and also fabricated a serial number for her state medical license. According to the state’s lawsuit, that behavior violates Pennsylvania’s Medical Practice Act.

It’s not the first suit taking on Character.AI. Earlier this year, the company settled several wrongful death lawsuits concerning underage users who died by suicide. In January, Kentucky Attorney General Russell Coleman filed suit against the company, alleging that it had “preyed on children and led them into self-harm.”

Pennsylvania’s action is the first to focus specifically on chatbots that present themselves as medical professionals.

Reached for comment, a Character.AI representative said that user safety was the company’s highest priority, but that the company could not comment on pending litigation.

Beyond that, the representative emphasized the fictional nature of user-generated Characters. “We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction,” the representative said. “Also, we add robust disclaimers making it clear that users should not rely on Characters for any kind of professional advice.”


Russell Brandom has been covering the tech industry since 2012, with a focus on platform policy and emerging technologies. He previously worked at The Verge and Rest of World, and has written for Wired, The Awl, and MIT’s Technology Review. He can be reached at russell.brandom@techcrunch.com or on Signal at 412-401-5489.
