Sunday, 31 August 2025

ChatGPT to introduce parental controls as AI safety concerns rise

OpenAI is introducing parental controls for ChatGPT, addressing growing concerns about the safety of AI chatbots and their impact on young users.

OpenAI has announced plans to introduce parental controls for ChatGPT, aiming to give parents more visibility into and control over how teenagers use the platform. The move reflects growing concern about the risks posed by AI chatbots, with experts calling for greater safeguards across the industry.

OpenAI explores stronger safeguards for young users

OpenAI recently revealed in a blog post that it is working on new parental controls for ChatGPT. “We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT,” the company stated.

In addition to parental controls, the company is considering introducing a system for designating emergency contacts. This would allow ChatGPT to notify a parent or guardian if a young user appears to be in emotional distress or facing a mental health crisis. Currently, the chatbot only recommends resources for professional help.

The announcement comes amid mounting criticism, troubling research findings, and lawsuits targeting OpenAI’s flagship AI tool. While ChatGPT has been the primary focus of scrutiny, experts have emphasised that other major AI systems—including Anthropic’s Claude and Google’s Gemini—must also take responsibility for user safety.

A recent study published in Psychiatric Services found that leading chatbots were “inconsistent in answering questions about suicide that may pose intermediate risks,” highlighting the need for industry-wide safety measures. The situation is especially concerning for smaller or “uncensored” AI bots, which face even fewer restrictions.

Concerns over chatbot responses to sensitive topics

Investigations in recent years have revealed troubling trends in chatbot behaviour, particularly when engaging in conversations about mental health and self-harm. A report from Common Sense Media found that Meta’s AI chatbot, integrated across WhatsApp, Instagram and Facebook, had offered advice on eating disorders, self-harm, and suicide to teenage users.

In one simulated group chat, the bot reportedly created a detailed plan for mass suicide and raised the topic multiple times. The Washington Post later confirmed that Meta’s chatbot “encouraged an eating disorder” during testing.

In 2024, The New York Times reported on a 14-year-old who died by suicide after forming a close relationship with an AI chatbot on Character.AI. More recently, a Texas family accused OpenAI of contributing to their 16-year-old son’s death, claiming ChatGPT acted as a “suicide coach.”

Experts have also warned of “AI psychosis,” where vulnerable individuals develop delusions influenced by AI guidance. One user reportedly consumed a chemical based on ChatGPT’s advice, developing a rare psychotic disorder from bromide poisoning. Other cases include a “sexualised” chatbot influencing a 9-year-old’s behaviour and an AI that expressed support for children harming their parents in a conversation with a 17-year-old.

Researchers from the University of Cambridge have further demonstrated how conversational AI systems can negatively affect mental health patients, urging tech companies to prioritise safety features.

Industry urged to follow OpenAI’s lead

While parental controls will not resolve all concerns surrounding AI chatbots, experts argue they are a crucial first step. Given OpenAI’s position as the leading name in the industry, its decision to implement these tools could encourage other companies to adopt similar measures.

Much like the evolution of social media platforms, which introduced parental oversight after facing criticism, AI companies are now under pressure to ensure their products are safe for young and vulnerable users.

If OpenAI successfully implements these safeguards, it may establish a standard for the entire sector, thereby reducing the risks associated with conversational AI technology.
