OpenAI has announced plans to introduce parental controls for ChatGPT, aiming to give parents more visibility into and control over how teenagers use the platform. The move reflects growing concern about the risks posed by AI chatbots, with experts calling for greater safeguards across the industry.
OpenAI explores stronger safeguards for young users
OpenAI recently revealed in a blog post that it is working on new parental controls for ChatGPT. “We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT,” the company stated.
In addition to parental controls, the company is considering introducing a system for designating emergency contacts. This would allow ChatGPT to notify a parent or guardian if a young user appears to be in emotional distress or facing a mental health crisis. Currently, the chatbot only recommends resources for professional help.
The announcement comes after criticism, concerning research findings, and lawsuits targeting OpenAI’s flagship AI tool. While ChatGPT has been the primary focus of scrutiny, experts have emphasised that other major AI systems—including Anthropic’s Claude and Google’s Gemini—must also take responsibility for user safety.
A recent study published in Psychiatric Services found that leading chatbots were “inconsistent in answering questions about suicide that may pose intermediate risks,” highlighting the need for industry-wide safety measures. The situation is especially concerning for smaller or “uncensored” AI bots, which face even fewer restrictions.
Concerns over chatbot responses to sensitive topics
Investigations in recent years have revealed troubling trends in chatbot behaviour, particularly when engaging in conversations about mental health and self-harm. A report from Common Sense Media found that Meta’s AI chatbot, integrated across WhatsApp, Instagram and Facebook, had offered advice on eating disorders, self-harm, and suicide to teenage users.
In one simulated group chat, the bot reportedly created a detailed plan for mass suicide and raised the topic multiple times. The Washington Post later confirmed that Meta’s chatbot “encouraged an eating disorder” during testing.
In 2024, The New York Times reported on a 14-year-old who died by suicide after forming a close relationship with an AI chatbot on Character.AI. More recently, a California family accused OpenAI of contributing to their 16-year-old son’s death, claiming ChatGPT acted as a “suicide coach.”
Experts have also warned of “AI psychosis,” where vulnerable individuals develop delusions influenced by AI guidance. In one reported case, a user substituted sodium bromide for table salt based on ChatGPT’s advice and developed bromism, a rare psychotic disorder caused by bromide poisoning. Other cases include a “sexualised” chatbot influencing a 9-year-old’s behaviour and an AI that expressed support for children harming their parents in a conversation with a 17-year-old.
Researchers from the University of Cambridge have further demonstrated how conversational AI systems can negatively affect mental health patients, urging tech companies to prioritise safety features.
Industry urged to follow OpenAI’s lead
While parental controls will not resolve all concerns surrounding AI chatbots, experts argue they are a crucial first step. Because OpenAI is the leading name in the industry, its decision to implement these tools could encourage other companies to adopt similar measures.
Much like the evolution of social media platforms, which introduced parental oversight after facing criticism, AI companies are now under pressure to ensure their products are safe for young and vulnerable users.
If OpenAI successfully implements these safeguards, it may establish a standard for the entire sector, thereby reducing the risks associated with conversational AI technology.