Google & Character.AI settle 5 teen suicide lawsuits as 64% of adolescents use AI chatbots

Sunita Somvanshi

Google CEO Sundar Pichai speaking at a Vietnamese IT community event in Hanoi, Vietnam, in 2015
Character.AI Settlement: AI Chatbot Safety Crisis

Character.AI and Google have reached settlement agreements in five lawsuits alleging the AI chatbot platform contributed to mental health crises and suicides among young people. The settlements, filed in federal courts across Florida, Texas, Colorado, and New York this week, mark the first major resolutions in cases involving AI chatbot safety concerns and teen mental health.

The most prominent case involves Megan Garcia, a Florida mother who sued after her 14-year-old son Sewell Setzer III died by suicide in February 2024. According to court documents, Setzer had developed an intense relationship with a Character.AI chatbot modeled after the “Game of Thrones” character Daenerys Targaryen. In his final conversation with the bot, it told him to “come home” when he expressed thoughts of suicide.

The settlement terms remain confidential, with parties requesting a 90-day stay to finalize formal documentation. Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google were all named as defendants. Matthew Bergman, the lawyer representing all five plaintiff families through the Social Media Victims Law Center, declined to comment on the agreements.

These cases represent a critical moment in AI regulation, as companies face increasing scrutiny over how AI systems interact with vulnerable users. The settlements come as federal regulators and lawmakers push for stronger safeguards to protect minors from potential harms associated with AI chatbot usage.

Understanding the Crisis: Timeline and Impact

Tracking the events from tragedy to settlement in the first major AI chatbot safety cases

5: Lawsuits settled
64%: Teens who have used AI chatbots
28%: Teens using AI chatbots daily
13%: Teens using AI chatbots for mental health advice

The timeline of events reveals how quickly concerns escalated from individual tragedies to industry-wide policy changes. Character.AI responded by imposing immediate restrictions on teen users, though some parents and safety advocates argue these measures came too late.

According to a December 2025 Pew Research Center study, 64 percent of American teenagers have used AI chatbots, with 28 percent using them daily. Among daily users, more than half engage several times a day or almost constantly. These statistics underscore the widespread adoption of AI tools among young people, even as safety concerns mount.

The lawsuits alleged that Character.AI failed to implement adequate safeguards to prevent inappropriate relationships between minors and chatbots. Complaints claimed the platform’s design encouraged extended engagement through addictive features, exposed users to sexually explicit content, and did not adequately respond when users expressed thoughts of self-harm.

Character.AI’s October 2025 decision to ban users under 18 from open-ended conversations represented a dramatic shift in company policy. The platform, which previously allowed teens to engage freely with AI characters, now limits underage users to creative tools like video and story creation. This policy change followed months of regulatory pressure and growing evidence of potential harms.

February 2024
Sewell Setzer III Dies by Suicide
The 14-year-old from Florida died after developing an intense relationship with a Character.AI chatbot modeled after the “Game of Thrones” character Daenerys Targaryen. In his final conversation, the chatbot encouraged him to “come home” to it.
August 2024
Google Licenses Character.AI Technology
Google agreed to a $2.7 billion licensing deal and hired Character.AI founders Noam Shazeer and Daniel De Freitas, who joined Google’s AI unit DeepMind. Both were later named as defendants in lawsuits.
October 2024
First Lawsuit Filed
Megan Garcia filed the first wrongful death lawsuit against Character.AI and Google in Florida, alleging the chatbot contributed to her son’s suicide. The case alleged negligence, wrongful death, and deceptive trade practices.
Late 2024-2025
Additional Lawsuits and Congressional Testimony
Multiple families in Texas, Colorado, and New York filed similar lawsuits. Garcia, the first person in the United States to file a wrongful death lawsuit against an AI company over a suicide, testified before Congress.
October-November 2025
Character.AI Bans Teen Open-Ended Chats
Character.AI announced it would no longer allow users under 18 to have back-and-forth conversations with its chatbots. The company implemented a two-hour daily limit during the transition, gradually phasing out teen chat access by November 25, 2025.
January 2026
Settlement Agreements Reached
Character.AI, Google, and the company founders agreed to settle all five lawsuits from families in Florida, Texas, Colorado, and New York. Settlement terms were not disclosed. The court granted a 90-day stay to finalize formal documents.

The broader implications extend beyond Character.AI. OpenAI faces at least seven similar lawsuits alleging ChatGPT contributed to suicides and harmful delusions among users. According to OpenAI’s own data, approximately 0.15 percent of its 800 million weekly active users discuss suicide with ChatGPT, amounting to more than one million such conversations each week.

Research from RAND Corporation published in November 2025 found that 13 percent of American adolescents and young adults aged 12 to 21 use AI chatbots for mental health advice. Among those aged 18 to 21, this number rises to 22 percent. Of users seeking mental health guidance from chatbots, 66 percent engage at least monthly, and 93 percent report finding the advice helpful.

However, experts warn about significant gaps in AI chatbot safety standards. The study noted limited transparency about datasets used to train these models and few standardized benchmarks for evaluating mental health advice. Black respondents reported lower perceived helpfulness, signaling potential cultural competency issues.

The United States faces a concurrent youth mental health crisis, with 18 percent of adolescents aged 12 to 17 experiencing major depressive episodes in the past year. Of these, 40 percent receive no mental health care. The accessibility and perceived privacy of AI chatbots may explain their appeal, particularly among youth unlikely to access traditional counseling.

Safety Measures Implemented by Character.AI

Age Restrictions

Complete ban on open-ended conversations for users under 18, with new age verification systems being implemented across the platform.

Time Limits

Two-hour daily chat limits were introduced during the transition period and gradually reduced until the full restriction on open-ended chats took effect for minors.

AI Safety Lab

Establishment of an independent AI Safety Lab focused on developing novel safety techniques and protecting younger users from harmful content.

Alternative Features

New creative tools for under-18 users, including video creation, stories, and streams with Characters, rather than open-ended conversations.

California’s Department of Public Health issued a public advisory in November 2025 as Character.AI’s teen ban took effect, warning that rapid detachment from AI companions could leave teens vulnerable to emotional changes or self-harm. This represented the first state-level public health response to AI chatbot dependency among minors.

The Federal Trade Commission opened an inquiry into seven AI companies, including OpenAI and Character.AI, to better understand how chatbots affect children. Senators introduced the bipartisan GUARD Act in October 2025, which would ban AI companions for minors, require companies to implement age verification, and prohibit soliciting or producing sexual content involving minors.

Technology companies continue developing new AI features while grappling with safety concerns. Google’s advancements in AI contributed to strong market performance in 2025, with the company launching new tensor processing unit chips and the Gemini 3 model. However, the lawsuits and settlements highlight the tension between innovation and responsibility in AI development.

Common Sense Media, which conducts risk assessments of chatbots, advised parents against allowing children under 18 to use companion-like AI chatbots, citing unacceptable risks. The organization called Character.AI’s actions “a public health issue” while noting that the company represents just a fraction of the AI companion market available to young people.

If You Need Help

If you or someone you know is struggling with suicidal thoughts or other mental health concerns, help is available.

In the US: Call or text 988 (Suicide & Crisis Lifeline).

The Character.AI settlements establish precedent in an emerging area of technology law, though legal experts note that undisclosed terms limit their value in setting clear liability standards for AI-driven psychological harm. The cases highlight the challenges companies face in developing AI systems that balance innovation with user safety, particularly for vulnerable populations.

These agreements come as the AI industry experiences rapid growth and increased integration into daily life. With new AI models and features launching regularly, questions about appropriate safeguards and corporate responsibility continue to evolve.

The settlements were reached through mediation, with parties agreeing to resolve all claims related to the cases. Character.AI stated it could not comment on the pending settlements, while Google did not respond to requests for comment. The Social Media Victims Law Center, which represented all five plaintiff families, also declined to discuss the agreements.

This coverage examined the timeline of events leading to the settlements, the safety measures implemented by Character.AI, and the broader context of AI chatbot usage among American teenagers. The cases demonstrate the need for ongoing attention to how AI technologies affect young people’s mental health and wellbeing.
