ChatGPT has introduced new parental control features, available to all users from 30 September 2025. The update, announced on the official blog of OpenAI, the company behind ChatGPT, is designed to give families greater oversight while ensuring children have a safer and more age-appropriate experience on the platform.
How parental controls work in ChatGPT
To activate parental controls, families must have at least two ChatGPT accounts. One account acts as the parent’s account, while the other belongs to the child. The parent sends an invitation to the child’s account, which must then be accepted to complete the link. Once linked, the parent account gains access to new controls, while the child’s account is restricted from changing these settings.
Children can unlink their account, but if they do so, the parent is automatically notified. The system is designed to strike a balance between allowing independence and ensuring that parents remain informed about how their children use the service.
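For readers who prefer to think in code, the linking flow can be pictured with a small sketch. This is purely illustrative: the class, state names, and notification callback below are assumptions made for the example, not part of OpenAI's actual implementation.

```python
from enum import Enum

# Hypothetical sketch of the parent-child linking flow described above.
# All names here are assumptions used only to illustrate the steps:
# invite -> accept -> linked, with parent notification on unlink.

class LinkState(Enum):
    UNLINKED = "unlinked"
    INVITED = "invited"   # parent has sent an invitation
    LINKED = "linked"     # child accepted; parental controls are active

class FamilyLink:
    def __init__(self, parent_id: str, child_id: str):
        self.parent_id = parent_id
        self.child_id = child_id
        self.state = LinkState.UNLINKED

    def send_invitation(self) -> None:
        """Parent initiates the link by inviting the child's account."""
        self.state = LinkState.INVITED

    def accept_invitation(self) -> None:
        """Child accepts; the parent account gains access to the controls."""
        if self.state is LinkState.INVITED:
            self.state = LinkState.LINKED

    def unlink(self, notify_parent) -> None:
        """Child can unlink at any time, but the parent is notified."""
        if self.state is LinkState.LINKED:
            self.state = LinkState.UNLINKED
            notify_parent(self.parent_id, "Your child has unlinked their account.")
```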
Features and safety limitations
The parental control system introduces a range of automatic protections. These include reduced exposure to graphic content, viral challenges, sexual or romantic roleplay, violent scenarios, and extreme beauty ideals. Parents can choose to turn these restrictions off, but the child cannot change them from their own account.
Additional settings allow parents to configure quiet hours, which prevent ChatGPT from being used during specified times, as well as to disable Voice Mode, memory, image generation, and the sharing of data for training purposes. These options give parents more control over how their children interact with the system.
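As a rough illustration of the kind of configuration a parent controls, here is a hypothetical settings model. The field names and defaults are assumptions made for this sketch; OpenAI has not published its actual configuration schema.

```python
from dataclasses import dataclass
from datetime import time

# Hypothetical model of the parent-facing settings described above.
# Field names and defaults are assumptions for illustration only.

@dataclass
class QuietHours:
    start: time  # ChatGPT unavailable from this time...
    end: time    # ...until this time

@dataclass
class TeenAccountSettings:
    # Content protections are on by default; only the parent can turn them off.
    reduced_graphic_content: bool = True
    block_viral_challenges: bool = True
    block_sexual_romantic_roleplay: bool = True
    block_violent_scenarios: bool = True
    block_extreme_beauty_ideals: bool = True

    # Optional restrictions the parent can configure.
    quiet_hours: QuietHours | None = None
    voice_mode_enabled: bool = True
    memory_enabled: bool = True
    image_generation_enabled: bool = True
    share_data_for_training: bool = False

# Example: block usage overnight and turn off Voice Mode for a teen account.
settings = TeenAccountSettings(
    quiet_hours=QuietHours(start=time(21, 0), end=time(7, 0)),
    voice_mode_enabled=False,
)
```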
A key part of the update is the introduction of Safety Notifications. If ChatGPT detects potentially harmful behaviour from a child user, such as signs of distress or self-harm, the content is flagged for review by the platform's internal team of trained mental health and adolescent specialists. Where the risk appears serious, parents may receive alerts via email, text message, or push notification on their mobile device. Parents can opt out of receiving such alerts if they prefer.
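The alert routing can be summarised with a short hypothetical sketch. The channel names and opt-out flag are assumptions used to illustrate the logic described above, not OpenAI's internal code.

```python
# Hypothetical sketch of how an alert might be routed once human reviewers
# judge a flagged conversation to be serious. Purely illustrative.

def route_parent_alert(risk_is_serious: bool, parent_opted_out: bool,
                       channels: list[str]) -> list[str]:
    """Return the channels a parent alert would be sent through, if any."""
    if not risk_is_serious or parent_opted_out:
        return []  # no alert: risk not serious, or the parent has opted out
    # Alerts can go out via email, text message, or push notification.
    return [c for c in channels if c in ("email", "sms", "push")]

print(route_parent_alert(True, False, ["email", "push"]))  # ['email', 'push']
```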
OpenAI has emphasised that the system is still a work in progress. While it can recognise concerning behaviours, it does not currently escalate cases to emergency services. The blog post also does not explain how the platform might handle situations in which a child is at risk from their own parent or guardian. The company has, however, stressed that children's privacy will be protected and that information will only be shared when necessary.
Plans for future improvements
To support families in understanding the new tools, OpenAI has created a dedicated resource page. It explains the parental controls and safety notifications in detail and outlines the different settings available.
Looking ahead, OpenAI is exploring the possibility of an age-prediction system. Such a system could automatically detect if a user is under 18 and apply teen-appropriate settings without requiring manual setup.
By introducing parental controls and new safety features, ChatGPT aims to enhance the safety of its service for younger users while providing parents with greater peace of mind. At the same time, the company continues to refine how these systems operate to ensure a balance between privacy, autonomy, and safety.