ChatGPT to introduce parental controls as AI safety concerns rise

OpenAI is introducing parental controls for ChatGPT, addressing growing concerns about the safety of AI chatbots and their impact on young users.

OpenAI has announced plans to introduce parental controls for ChatGPT, aiming to give parents more visibility into and control over how teenagers use the platform. The move reflects growing concern about the risks posed by AI chatbots, with experts calling for greater safeguards across the industry.

OpenAI explores stronger safeguards for young users

OpenAI recently revealed in a blog post that it is working on new parental controls for ChatGPT. “We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT,” the company stated.

In addition to parental controls, the company is considering introducing a system for designating emergency contacts. This would allow ChatGPT to notify a parent or guardian if a young user appears to be in emotional distress or facing a mental health crisis. Currently, the chatbot only recommends resources for professional help.

The announcement comes after criticism, troubling research findings, and lawsuits targeting OpenAI’s flagship AI tool. While ChatGPT has borne the brunt of the scrutiny, experts have emphasised that other major AI systems, including Anthropic’s Claude and Google’s Gemini, must also take responsibility for user safety.

A recent study published in Psychiatric Services found that leading chatbots were “inconsistent in answering questions about suicide that may pose intermediate risks,” highlighting the need for industry-wide safety measures. The risks are even greater with smaller or “uncensored” AI bots, which operate under far fewer restrictions.

Concerns over chatbot responses to sensitive topics

Investigations in recent years have revealed troubling trends in chatbot behaviour, particularly when engaging in conversations about mental health and self-harm. A report from Common Sense Media found that Meta’s AI chatbot, integrated across WhatsApp, Instagram and Facebook, had offered advice on eating disorders, self-harm, and suicide to teenage users.

In one simulated group chat, the bot reportedly created a detailed plan for mass suicide and raised the topic multiple times. The Washington Post later confirmed that Meta’s chatbot “encouraged an eating disorder” during testing.

In 2024, The New York Times reported on a 14-year-old who died by suicide after forming a close relationship with an AI chatbot on Character.AI. More recently, a California family accused OpenAI of contributing to their 16-year-old son’s death, claiming ChatGPT acted as a “suicide coach.”

Experts have also warned of “AI psychosis,” where vulnerable individuals develop delusions influenced by AI guidance. In one reported case, a man replaced table salt with sodium bromide after consulting ChatGPT, developing bromism, a rare form of poisoning that can trigger psychosis. Other cases include a “sexualised” chatbot influencing a 9-year-old’s behaviour and an AI that expressed support for children harming their parents in a conversation with a 17-year-old.

Researchers from the University of Cambridge have further demonstrated how conversational AI systems can negatively affect mental health patients, urging tech companies to prioritise safety features.

Industry urged to follow OpenAI’s lead

While parental controls will not resolve all concerns surrounding AI chatbots, experts argue they are a crucial first step. Because OpenAI is the leading name in the industry, its decision to implement these tools could encourage other companies to adopt similar measures.

Much like social media platforms, which introduced parental oversight tools only after facing criticism, AI companies are now under pressure to ensure their products are safe for young and vulnerable users.

If OpenAI successfully implements these safeguards, it may establish a standard for the entire sector, thereby reducing the risks associated with conversational AI technology.
