OpenAI rolls out an age-detection system in ChatGPT to improve child safety
OpenAI introduces age detection in ChatGPT to identify users under 18 and automatically apply stronger safety protections.
OpenAI has introduced a new age-detection feature for ChatGPT as part of its ongoing efforts to strengthen online safety for younger users. The system is designed to assess whether an account is likely to be used by someone under 18 and, if so, automatically apply additional safeguards. The company says the update responds to growing global concern about minors encountering unsuitable content through digital platforms.
The feature operates quietly in the background and does not interrupt the normal user experience. OpenAI has emphasised that the goal is to protect children and teenagers while keeping ChatGPT accessible and easy to use. As artificial intelligence tools become more embedded in daily life, the company believes that safety measures must evolve alongside their capabilities.
Automatic safeguards for under-18 users
The new system uses an age-detection model to estimate whether a user is likely to be under 18. When ChatGPT identifies an account as possibly belonging to a minor, it automatically switches the user to a version of the service designed for younger audiences. This version applies stricter content filters and reduces access to material that may be harmful or inappropriate.
These safeguards are intended to cover a broad range of risks. They include limits on graphic violence, dangerous viral challenges, explicit role play, depictions of self-harm, and content that encourages extreme beauty ideals or unhealthy dieting. OpenAI has said the changes reflect an understanding that younger users may be more sensitive to certain topics and less equipped to process them safely.
Crucially, the system does not depend entirely on users stating their age honestly. OpenAI has acknowledged that age self-reporting is often unreliable, particularly among teenagers. By implementing automated protections, the company aims to ensure that safety measures are triggered even when a user has registered as an adult.
How the age-detection model identifies minors
Instead of asking for identity documents at sign-up, OpenAI’s model evaluates a mix of account-level and behavioural signals. These may include usage patterns and aspects of account history that help estimate whether a user is under 18. While the company has not disclosed the precise criteria used, it has said the process is designed to minimise disruption and respect user privacy.
When the system cannot determine a user’s age with confidence, it defaults to a more restricted experience. OpenAI has described this as a safety-first approach, ensuring that potential minors are protected even in cases of uncertainty. As a result, some adult users may temporarily face tighter content controls.
To address misclassification, OpenAI has provided a way for adults to confirm their age. Users who believe they have been wrongly identified as teenagers can complete a selfie-based age verification process. Once verified, their account is returned to the standard ChatGPT experience with full access restored.
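The decision flow described above — estimate a user's age, fall back to restrictions when uncertain, and restore full access after verification — can be sketched roughly as follows. This is an illustrative assumption about the logic, not OpenAI's actual implementation; the names, structure, and confidence threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    """Hypothetical output of an age-detection model."""
    likely_minor: bool  # model believes the account belongs to someone under 18
    confident: bool     # whether the estimate meets a confidence threshold

def select_experience(estimate: AgeEstimate, age_verified_adult: bool) -> str:
    """Return which ChatGPT experience an account should receive."""
    if age_verified_adult:
        # A completed selfie-based verification restores the standard experience.
        return "standard"
    if estimate.likely_minor:
        # Accounts identified as likely minors get stricter content filters.
        return "restricted"
    if not estimate.confident:
        # Safety-first default: uncertain cases also get the restricted version.
        return "restricted"
    return "standard"
```

The key design choice this sketch illustrates is that the restricted experience is the default in two situations, not one: when the model positively identifies a minor, and when it simply cannot decide. Only explicit verification, not self-reported age, moves an ambiguous account back to the standard experience.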
Growing pressure on platforms to protect young users
OpenAI has said the update is informed by research showing that adolescents differ from adults in key areas such as emotional regulation, decision-making and attitudes towards risk. As generative AI becomes more powerful and widely used, the company believes it must take additional steps to reduce the risk that younger users encounter content that could negatively affect their well-being.
The move also reflects broader pressure on technology companies to demonstrate stronger child and teen safety standards. Governments, educators and parents have increasingly questioned how digital platforms shape behaviour, mental health and body image among young people. AI-driven tools that can generate highly personalised responses have added urgency to these debates.
Similar trends can be seen in public policy. In Singapore, authorities have announced that secondary school students will be barred from using mobile devices during school hours from 2026, including during recess and co-curricular activities. The decision aims to reduce digital distractions and promote healthier technology habits among teenagers.
Together, these developments suggest a shift towards built-in protections rather than optional safety settings. By embedding age-based safeguards directly into ChatGPT, OpenAI is signalling that protecting minors is a fundamental part of how AI services should be developed and deployed.