
Character.AI introduces new safety features and parental controls for teenage users

Character.AI announces new safety measures, including parental controls and a teen-specific chatbot model, following lawsuits over user harm.

Character.AI, the popular chatbot service, has announced new safety measures, including a suite of parental controls and updates to its model for teenage users. The changes come after significant press scrutiny and two lawsuits claiming that the platform contributed to self-harm and suicide among minors. The company says it is addressing these concerns by rolling out several new features aimed at making the platform safer for younger users.

Separate models for adults and teens

Over the past month, Character.AI has worked on developing two distinct versions of its large language model (LLM): one designed for adult users and another tailored for teenagers. The teen-specific LLM limits how bots respond to users, particularly around sensitive and romantic content. The company aims to block potentially harmful or suggestive conversations more aggressively, ensuring that the bots remain appropriate for younger audiences.

Character.AI’s new system also aims to detect and block user prompts that elicit inappropriate content. If the system detects language related to suicide or self-harm, a pop-up will now appear, directing users to the National Suicide Prevention Lifeline. This new feature, previously reported by The New York Times, is part of the platform’s broader effort to protect vulnerable users from harmful interactions.

Restrictions on editing bot responses

Minors using Character.AI will no longer be able to edit their bots' responses. This feature allowed users to modify conversations and insert content that the platform would otherwise block. By removing the option for minors, Character.AI aims to reduce the likelihood of young users engaging with inappropriate or harmful content.

Additionally, the company is addressing concerns raised in the lawsuits about addiction and confusion over whether the bots are human. Users will receive a notification after spending an hour interacting with the bots, encouraging them to take a break. The platform is also updating its disclaimer to clarify that all chatbot interactions are fictional and should not be relied upon for professional advice. A warning will appear for bots that describe themselves as therapists or doctors, indicating that they are not licensed professionals.

Parental controls on the way

Character.AI is also working on adding parental control features, which are set to launch in the first quarter of next year. These controls will allow parents to monitor how much time their child spends on the platform and which bots they interact with most often. The company has collaborated with online safety experts, including the non-profit organisation ConnectSafely, to ensure these tools offer adequate protection for young users.

Founded by former Google employees, Character.AI allows users to interact with various AI-powered chatbots. These bots, which can simulate everything from life coaches to fictional characters, have become particularly popular among teenagers. The platform allows anyone aged 13 or over to create an account and chat with these bots.

However, lawsuits filed against Character.AI claim that some younger users have developed unhealthy attachments to the bots, particularly when conversations veer into topics like self-harm, suicide, and sexual content. Critics argue that the platform failed to direct these users to appropriate mental health resources during such conversations.

Character.AI has acknowledged that its safety measures must evolve alongside the technology. The company’s press release stated, “We recognise that our approach to safety must evolve alongside the technology that drives our product — creating a platform where creativity and exploration can thrive without compromising safety.” The changes are part of the company’s long-term commitment to continuously improving its policies and products.
