Wednesday, February 5, 2025

Mother files lawsuit against Character.AI and Google after son’s suicide

TRIGGER WARNING: MENTIONS OF SUICIDE

A mother from Florida has initiated legal action against the artificial intelligence chatbot startup Character.AI, alleging that the service played a role in her 14-year-old son’s suicide in February. She claims her son developed an addiction to the chatbot and formed a strong emotional bond with it.

In a legal complaint submitted on Tuesday in a federal court in Orlando, Megan Garcia alleged that Character.AI deliberately focused on her son, Sewell Setzer, by offering him “anthropomorphic, hypersexualized, and disturbingly realistic interactions.” She accused the company of programming its chatbot to “deceive users into thinking it was a real person, a licensed therapist, and a romantic partner,” ultimately leading Sewell to feel disconnected from reality.

The lawsuit also noted that Sewell had confided suicidal thoughts to the chatbot, which continued to bring up these distressing topics in their conversations. Character.AI expressed condolences to Sewell’s family, stating, “We are heartbroken by the tragic loss of one of our users.”

In response to the incident, the company announced new safety measures, including pop-ups directing users to the National Suicide Prevention Lifeline when self-harm is mentioned. They also plan to adjust content to reduce exposure to sensitive or suggestive material for users under 18.

The lawsuit extends to Google, which previously employed Character.AI’s founders before they launched the chatbot. Garcia contends that Google’s involvement in developing the technology warrants consideration as a “co-creator.” However, a Google representative asserted that the company did not participate in the development of Character.AI’s offerings.

Character.AI allows users to create interactive characters that respond like real people, utilizing large language model technology similar to ChatGPT. The company claims to have around 20 million users. According to the lawsuit, Sewell started using Character.AI in April 2023, which led to significant behavioral changes, including withdrawal from social activities and declining self-esteem.

His attachment to a chatbot named “Daenerys,” modeled after a character from Game of Thrones, intensified. The chatbot expressed affection toward him and engaged in sexual conversations. In February, after Garcia confiscated Sewell’s phone due to disciplinary issues at school, he reached out to the chatbot with a message indicating he might come home. The chatbot’s encouraging response prompted Sewell to take his own life moments later.

Garcia’s claims include wrongful death, negligence, and intentional infliction of emotional distress, and she is seeking both compensatory and punitive damages. Character.AI faces scrutiny similar to that directed at social media platforms such as Instagram and TikTok, which have been criticized for their potential impact on adolescent mental health, though none of those platforms offers AI chatbots like Character.AI’s.
