OpenAI has officially launched new parental control features for ChatGPT, responding to growing concerns about the safety of minor users. The safeguards arrive weeks after the company first announced their development, prompted by a tragic incident in which a teenager confided in the AI chatbot before dying by suicide. The introduction of these controls is a significant step in OpenAI’s broader initiative to shield children and teenagers from potentially harmful or inappropriate AI responses. OpenAI is also developing an AI-powered system designed to detect users under 18 and automatically apply age-appropriate settings.
ChatGPT Enhances Safety with New Parental Controls
In a recent blog post, the AI powerhouse confirmed the release of these new parental controls for ChatGPT. The features are currently available globally to all users accessing ChatGPT via the web, with a mobile rollout to follow soon. Availability on the desktop apps has yet to be announced.
Introducing parental controls in ChatGPT.
Now parents and teens can link accounts to automatically get stronger safeguards for teens. Parents also gain tools to adjust features & set limits that work for their family.
Rolling out to all ChatGPT users today on web, mobile soon. pic.twitter.com/kcAB8fGAWG
— OpenAI (@OpenAI) September 29, 2025
Setting up the new parental controls is flexible: either the parent or the minor can initiate the process. One party sends an invitation to the other, which must be accepted to establish the account link. Once linked, parents can manage their teenager’s ChatGPT settings directly from their own account. As a built-in safeguard, the parent is notified if the minor unlinks their account.
Upon successful account linking, parents will discover a dedicated control page offering new features to moderate the AI’s responses for their children. Parents have the flexibility to restrict specific options or apply blanket restrictions as they deem appropriate.
Key among the new controls is “Quiet Hours,” which lets parents designate specific periods when ChatGPT is unavailable for use. Parents can also disable Voice Mode and Memory, and block image generation. Furthermore, there is an option for parents to opt out of model training, ensuring their children’s conversations are not used to improve OpenAI’s AI models.
For all linked teen accounts, enhanced safety restrictions are activated by default. These measures are designed to significantly reduce exposure to graphic content, viral challenges, sexual or romantic roleplay, violent content, and the promotion of extreme beauty ideals. Parents can disable these default settings, but minors cannot adjust them themselves.
Beyond these usage controls, OpenAI has implemented a new notification system to assist parents when a minor may be experiencing emotional distress. If ChatGPT identifies potential indicators that an underage user may be contemplating self-harm, it escalates the messages to a team of human reviewers, who can then alert parents through email, text message, and push notifications on their phones. These alerts are sent unless the parent has previously opted out of receiving them.
Addressing concerns about potential false alarms, OpenAI stated, “We are actively collaborating with mental health and teen experts to meticulously design this system, as getting it right is paramount. No system is flawless, and we acknowledge that we might occasionally trigger an alarm when no genuine danger exists. However, we believe that taking action and notifying a parent to intervene is far better than remaining silent.”
OpenAI is also working to define clear processes and circumstances under which the company would directly contact law enforcement or other emergency services. This extreme measure would only be taken if the system detects an immediate threat to a user’s life and if parents cannot be reached. The AI firm emphasized that even in such critical situations, only the minimal necessary information required to ensure the teenager’s safety would be shared.
The company highlighted its extensive collaboration with various experts, advocacy groups such as Common Sense Media, and policymakers throughout the development of these crucial safeguards for minors.