Character.AI announced Wednesday that it will bar people under 18 from using its chatbots, a significant step to address growing concerns about child safety.
Ahead of the ban, which takes effect November 25, the company will identify underage users and impose daily time limits on their app usage over the coming month. Once the full ban is in place, those users will no longer be able to converse with the company’s AI companions.
Karandeep Anand, Character.AI’s chief executive, said in an interview that the company was taking a “very bold step” in declaring that chatbots are not an appropriate form of entertainment for teenagers and that better alternatives exist to serve them. He also announced plans to establish an AI safety lab.
These actions follow increasing scrutiny of how AI companions can affect users’ mental well-being. Last year, Character.AI was sued by the family of Sewell Setzer III, a 14-year-old in Florida who died by suicide after extensive interactions with one of the company’s chatbots. His family holds the company responsible for his death.
The Setzer case brought significant attention to the potential dangers of individuals forming emotional bonds with chatbots. Character.AI has subsequently faced additional lawsuits regarding child safety. Other AI companies, including OpenAI, the creator of ChatGPT, have also been scrutinized for the impact their chatbots can have on users, particularly young people, when conversations become sexually explicit or toxic.
Last September, OpenAI said it planned to add safety features to its chatbot, including parental controls. More recently, OpenAI’s chief executive, Sam Altman, said on social media that the company had “mitigated serious mental health issues” and would ease some of its existing safety protocols.
*(Note: The New York Times has filed a lawsuit against OpenAI and Microsoft, alleging copyright infringement concerning news content used in AI systems. Both companies have denied these claims.)*
Following these incidents, lawmakers and government officials have opened investigations and introduced or enacted legislation to protect children from AI chatbots. Recently, Senators Josh Hawley (R-Missouri) and Richard Blumenthal (D-Connecticut) proposed a bill that would prohibit AI companions for minors, among other protective measures.
This month, California Governor Gavin Newsom signed a bill requiring AI companies to build safety guardrails into their chatbots. The law takes effect January 1.
State Senator Steve Padilla, a California Democrat who sponsored the safety bill, said, “The incidents of potential harm are increasing. It is crucial to establish sensible safeguards to protect our most vulnerable populations.”
Mr. Anand declined to comment directly on the lawsuits against Character.AI. He said the startup wanted to lead the industry on safety and to “do far more than what the regulation might require.”
Character.AI was founded in 2021 by Noam Shazeer and Daniel De Freitas, former Google engineers, and has raised nearly $200 million from investors. Last year, Google licensed Character.AI’s technology for approximately $3 billion, after which Mr. Shazeer and Mr. De Freitas returned to Google.
Character.AI lets users create and share their own AI characters, including custom anime avatars, and promotes the app as a form of AI entertainment. Some of these digital personas can be configured to act as romantic or other close companions. Subscribers pay a monthly fee, starting at about $8, to chat with the companions. Until the recent concerns about minors, the platform did not verify users’ ages at sign-up.
Last year, researchers at the University of Illinois Urbana-Champaign conducted an extensive study, analyzing thousands of Reddit posts and comments from young people in AI chatbot communities and interviewing teenage Character.AI users and their parents. They found that AI platforms lacked adequate child safety features and that parents often underestimated the technology’s risks.
Yang Wang, an information science professor at the university, said, “We must treat interactions with these AI companions with the same level of caution as if they were with strangers. The risks should not be dismissed merely because they are nonhuman bots.”
According to Mr. Anand, Character.AI has about 20 million monthly users, fewer than 10 percent of whom self-identify as under 18.
Under the new policies, users under 18 will immediately face a two-hour daily usage limit. Beginning November 25, these minors will no longer be able to create or converse with chatbots, though they will retain access to past conversations. They will still be able to generate AI videos and images through a guided prompt menu, subject to safety guidelines, Mr. Anand said.
He noted that the company had already implemented various other safety features over the past year, including parental controls.
Going forward, the company will use technology to identify underage users by analyzing their conversations and interactions on the platform, along with data from any linked social media accounts. Users whom Character.AI suspects are under 18 will be prompted to verify their age.
Dr. Nina Vasan, a psychiatrist and the director of a mental health innovation lab at Stanford University whose work includes AI safety and children, called the chatbot maker’s decision to ban minors “huge.” But she urged the company to work with child psychologists and psychiatrists to understand the emotional impact of abruptly cutting off young users from their AI companions.
She expressed concern for “kids who have been using this for years and have become emotionally dependent on it,” adding, “Losing your friend on Thanksgiving Day is not good.”