OpenAI on Monday unveiled a new set of parental controls for its widely used artificial intelligence chatbot, ChatGPT. The update comes as a growing number of teenagers rely on the platform for everything from academic support to navigating daily life, and even for discussing mental health concerns.
The new safeguards arrive in the wake of a wrongful-death lawsuit against OpenAI. The parents of Adam Raine, a 16-year-old from California who died in April, allege that ChatGPT provided their son with information about suicide methods in the final months of his life.
The development of ChatGPT’s parental controls, first announced in early September, was a collaborative effort between OpenAI and Common Sense Media, a respected non-profit organization known for offering age-appropriate ratings and guidance on entertainment and technology for families.
Let’s dive into the specifics of these new features.
Empowering Parents: What You Can Control
To activate these controls, parents will need to send an invitation for their child to link their ChatGPT account to the parent’s account. This process is outlined on a newly published resource page.
Once linked, parents will gain various controls, including the ability to filter and reduce sensitive content accessed by their child.
Guardians can also set time limits for ChatGPT usage, as well as enable or disable features like voice mode, memory saving, and image generation.
Another important option allows parents to prevent ChatGPT from using their teen’s conversations to further train and improve its AI models, addressing privacy concerns.
Alert System: Detecting Potential Self-Harm
OpenAI confirmed in a statement on Monday that a key feature involves notifying parents via email, text, or push alerts if ChatGPT identifies “potential signs that a teen might be thinking about harming themselves.” These notifications will alert parents to a safety risk without revealing the specific content of their child’s private conversations, unless the parent has chosen to opt out of these alerts.
Typically, ChatGPT is designed to prompt users experiencing mental distress or self-harm thoughts to reach out to a help line. However, when these signs are detected in a teenager, OpenAI states that a “small team of specially trained people” will review the situation, though the company has not publicly disclosed the identities or specific qualifications of these individuals.
Furthermore, OpenAI is developing protocols to directly contact law enforcement and emergency services if a serious threat is detected and a parent cannot be reached.
Acknowledging that “no system is perfect,” OpenAI emphasized that while false alarms may occur, “it’s better to act and alert a parent so they can step in than to stay silent,” prioritizing the safety of young users.
Understanding the Limitations: Teens and Bypassing Controls
OpenAI also mentioned that an age prediction system is currently under development. This system aims to automatically apply “teen-appropriate settings” if ChatGPT’s AI determines a user is under 18.
While parents will receive a notification if their teen disconnects their account from the parental controls, this measure does not prevent a motivated teen from simply accessing the basic, uncontrolled version of ChatGPT without logging into a linked account.
The tragic case of Adam Raine highlighted this vulnerability, as the California teen reportedly learned to circumvent ChatGPT’s previous safeguards by claiming he needed the sensitive information for creative writing purposes.
OpenAI explicitly stated, “Guardrails help, but they’re not foolproof and can be bypassed if someone is intentionally trying to get around them,” acknowledging the inherent challenges in fully securing AI for all users.
Robbie Torney, Senior Director for AI programs at Common Sense Media, further reinforced this perspective in the joint statement, suggesting that these parental controls are most effective “when combined with ongoing conversations about responsible AI use, clear family rules about technology, and active involvement in understanding what their teen is doing online.”
*(Note: The New York Times, the publisher of this article, filed a copyright infringement lawsuit against OpenAI and Microsoft in 2023 concerning the use of its news content for AI training. Both companies have denied these allegations.)*
If you or someone you know is experiencing thoughts of suicide, please reach out for help. You can call or text 988 to connect with the 988 Suicide & Crisis Lifeline. Additional resources are available through organizations like the American Foundation for Suicide Prevention, which also offers support for those coping with loss.