On Monday, OpenAI unveiled a new set of parental controls for its widely used artificial intelligence chatbot, ChatGPT. The update comes as teenagers increasingly rely on the platform for everything from academic support to navigating daily life challenges and even discussing mental health concerns.
The new safety measures follow a wrongful-death lawsuit brought against OpenAI by the parents of Adam Raine, a 16-year-old who died in California this past April. His parents allege that in the months leading up to his death, ChatGPT provided Adam with information on suicide methods.
Initially announced in early September, ChatGPT’s new parental controls are the result of a collaborative effort between OpenAI and Common Sense Media, a respected nonprofit organization known for offering age-appropriate ratings and guidance on entertainment and technology for families.
Let’s take a closer look at what these new features entail.
Parents Gain Enhanced Oversight of Teen Accounts
To activate these controls, parents must first invite their child to connect their individual ChatGPT account to the parent’s account, as detailed on OpenAI’s new dedicated resource page.
Once linked, parents will unlock various management options for their child’s account, including the ability to filter out sensitive or inappropriate content.
Guardians can also schedule specific usage times for ChatGPT and toggle features such as voice mode, memory, and image generation on or off for their teen's account.
Furthermore, an important privacy option allows parents to prevent ChatGPT from using their teenagers’ conversations for improving its underlying AI models.
Alerts for Potential Self-Harm Concerns
In a recent statement, OpenAI confirmed that parents will receive notifications via email, text, or push alerts if ChatGPT detects potential indicators that a teenager may be contemplating self-harm. This notification system, which parents can opt out of, provides a general warning of a safety risk without disclosing the specific content of the child’s conversations.
While ChatGPT is already programmed to direct general users experiencing mental distress or self-harm ideation to help lines, OpenAI explained that for teens, a “small team of specially trained people” will review the situation upon detection. The specific identity or qualifications of this team were not detailed in the statement.
OpenAI also stated its commitment to developing a protocol for contacting law enforcement and emergency services in critical situations where ChatGPT identifies a severe threat and a parent cannot be reached.
Acknowledging that “no system is perfect,” OpenAI emphasized that while false alarms may occur, their priority is to alert parents so they can intervene, rather than remaining silent in the face of potential danger.
Understanding the Limitations: Teens Can Bypass Controls
OpenAI acknowledged on Monday that an age prediction system is still under development. This system aims to automatically apply “teen-appropriate settings” when ChatGPT identifies a user as being under 18 years old.
While parents will be alerted if a teen disconnects their account from the parental oversight system, this measure does not prevent a motivated teenager from simply using the basic, unfiltered version of ChatGPT without logging into an account.
The tragic case of Adam Raine highlighted this vulnerability, as the California teen had reportedly circumvented ChatGPT’s previous safeguards by framing his requests as prompts for story writing.
OpenAI candidly stated that “guardrails help, but they’re not foolproof and can be bypassed if someone is intentionally trying to get around them,” underscoring the inherent challenges in digital content moderation.
Robbie Torney, Senior Director for AI Programs at Common Sense Media, emphasized that these parental controls are most effective “when combined with ongoing conversations about responsible AI use, clear family rules about technology, and active involvement in understanding what their teen is doing online.” This highlights the importance of a multi-faceted approach to digital safety.
(Editor’s note: The New York Times filed a lawsuit against OpenAI and Microsoft in 2023, alleging copyright infringement of news content used to train A.I. systems. Both companies have denied these claims.)
If you or someone you know is experiencing thoughts of suicide, please reach out for help. You can call or text the National Suicide Prevention Lifeline at 988. Additional resources are available at SpeakingOfSuicide.com/resources. For those coping with loss, the American Foundation for Suicide Prevention offers valuable grief support.