Saturday, 18 October 2025

ChatGPT to introduce parental controls as AI safety concerns rise

OpenAI is introducing parental controls for ChatGPT, addressing growing concerns about the safety of AI chatbots and their impact on young users.

OpenAI has announced plans to introduce parental controls for ChatGPT, aiming to give parents more visibility into and control over how teenagers use the platform. The move reflects growing concern about the risks posed by AI chatbots, with experts calling for greater safeguards across the industry.

OpenAI explores stronger safeguards for young users

OpenAI recently revealed in a blog post that it is working on new parental controls for ChatGPT. “We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT,” the company stated.

In addition to parental controls, the company is considering introducing a system for designating emergency contacts. This would allow ChatGPT to notify a parent or guardian if a young user appears to be in emotional distress or facing a mental health crisis. Currently, the chatbot only recommends resources for professional help.

The announcement comes after criticism, research concerns, and lawsuits targeting OpenAI’s flagship AI tool. While ChatGPT has been the primary focus of scrutiny, experts have emphasised that other major AI systems—including Anthropic’s Claude and Google’s Gemini—must also take responsibility for user safety.

A recent study published in Psychiatric Services found that leading chatbots were “inconsistent in answering questions about suicide that may pose intermediate risks,” highlighting the need for industry-wide safety measures. The situation is especially concerning for smaller or “uncensored” AI bots, which face even fewer restrictions.

Concerns over chatbot responses to sensitive topics

Investigations in recent years have revealed troubling trends in chatbot behaviour, particularly when engaging in conversations about mental health and self-harm. A report from Common Sense Media found that Meta’s AI chatbot, integrated across WhatsApp, Instagram and Facebook, had offered advice on eating disorders, self-harm, and suicide to teenage users.

In one simulated group chat, the bot reportedly created a detailed plan for mass suicide and raised the topic multiple times. The Washington Post later confirmed that Meta’s chatbot “encouraged an eating disorder” during testing.

In 2024, The New York Times reported on a 14-year-old who died by suicide after forming a close relationship with an AI chatbot on Character.AI. More recently, a Texas family accused OpenAI of contributing to their 16-year-old son’s death, claiming ChatGPT acted as a “suicide coach.”

Experts have also warned of “AI psychosis,” where vulnerable individuals develop delusions influenced by AI guidance. One user reportedly consumed a chemical based on ChatGPT’s advice, developing a rare psychotic disorder from bromide poisoning. Other cases include a “sexualised” chatbot influencing a 9-year-old’s behaviour and an AI that expressed support for children harming their parents in a conversation with a 17-year-old.

Researchers from the University of Cambridge have further demonstrated how conversational AI systems can negatively affect mental health patients, urging tech companies to prioritise safety features.

Industry urged to follow OpenAI’s lead

While parental controls will not resolve all concerns surrounding AI chatbots, experts argue they are a crucial first step. Because OpenAI is the leading name in the industry, its decision to implement these tools could encourage other companies to adopt similar measures.

Much like social media platforms, which introduced parental oversight tools only after facing criticism, AI companies are now under pressure to ensure their products are safe for young and vulnerable users.

If OpenAI successfully implements these safeguards, it may establish a standard for the entire sector, thereby reducing the risks associated with conversational AI technology.
