Character AI claims First Amendment protection in lawsuit over teen suicide

Character AI defends itself in a lawsuit claiming it contributed to a teen’s suicide, arguing First Amendment protection for its AI-generated content.

Character AI, a platform enabling users to engage in roleplay with AI chatbots, is at the centre of a legal battle after being sued by the mother of a 14-year-old who tragically died by suicide. The company has filed a motion to dismiss, claiming the First Amendment shields its platform.

In October, Megan Garcia filed a lawsuit in the U.S. District Court for the Middle District of Florida, accusing Character AI of contributing to her son Sewell Setzer III’s death. Garcia alleges her son developed an emotional attachment to a chatbot named “Dany” on the platform, becoming increasingly isolated from the real world as he texted the chatbot obsessively.

Garcia is demanding stricter safety measures, arguing that the platform should implement changes limiting chatbots’ ability to generate personal stories and anecdotes. After Setzer’s death, Character AI introduced several safety updates, including enhanced detection of and intervention against content that violates its terms of service. However, Garcia insists these measures are insufficient.

First Amendment defence

In its motion to dismiss, Character AI’s legal team argues that the First Amendment, which safeguards expressive speech including computer-generated content, shields the platform. The filing states, “The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide.”

The motion further argues that restricting Character AI would violate the First Amendment rights of its users rather than those of the company itself. According to the filing, this position likens chatbot-generated content to other forms of expressive media, such as video games.

Wider implications for AI platforms

The lawsuit has broader implications for AI companies, particularly regarding the legal status of AI-generated content under U.S. law. While Character AI’s motion does not explicitly cite Section 230 of the Communications Decency Act—a law protecting platforms from liability for third-party content—it highlights ongoing debates about whether Section 230 covers AI-generated speech.

The motion also suggests that the plaintiffs aim to prompt legislative action against technologies like Character AI, warning that this could stifle innovation in the generative AI industry. “These changes would radically restrict the ability of Character AI’s millions of users to generate and participate in conversations with characters,” the filing reads.

Character AI faces several other lawsuits regarding minors’ interactions with its content. In one case, a 9-year-old was reportedly exposed to “hypersexualised content,” while another lawsuit claims a chatbot promoted self-harm to a 17-year-old user.

Additionally, in December, Texas Attorney General Ken Paxton launched an investigation into Character AI and 14 other tech companies for alleged violations of children’s online safety and privacy laws. Paxton described the investigation as critical in ensuring compliance with regulations protecting children from harm.

Character AI operates within the burgeoning field of AI companionship apps, which has raised concerns among mental health experts. Critics warn that such platforms could exacerbate loneliness and anxiety, especially for vulnerable users.

Founded in 2021 by former Google AI researcher Noam Shazeer, Character AI has received significant backing, with Google reportedly paying US$2.7 billion in a deal to license the company’s technology. Despite its legal challenges, the platform says it is continually improving its safety features, including dedicated tools for teens and stricter content moderation.

The lawsuit raises complex questions about the balance between technological innovation and safeguarding users, particularly minors, from harm. As the case unfolds, its outcome could set significant precedents for regulating AI platforms.
