Character.AI announced Wednesday that it will prohibit users under 18 from accessing its chatbots, a significant step aimed at protecting child safety.
The new policy takes effect November 25. Over the coming month, Character.AI plans to identify underage users and limit their time in the app; after the deadline, those users will no longer be able to converse with the platform’s AI companions.
Karandeep Anand, Character.AI’s chief executive, said in an interview, ‘We are taking a very bold step to declare that for teen users, chatbots are not the ideal form of entertainment; there are much better ways to engage them.’ He said the company also planned to establish an AI safety lab.
These actions come amid growing scrutiny of how AI companions can affect users’ mental health. Last year, Character.AI faced a lawsuit from the family of Sewell Setzer III, a 14-year-old in Florida who died by suicide after extensive interaction with one of the company’s chatbots. His family holds the company accountable for his death.
The case drew significant attention to the potential dangers of people forming strong emotional bonds with chatbots. Since then, Character.AI has been targeted by additional child safety lawsuits. Other AI developers, including OpenAI, the maker of ChatGPT, have also faced criticism over their chatbots’ effects on users, especially young people, including in sexually explicit or harmful conversations.
In September, OpenAI announced plans to implement new safety features for its chatbot, such as parental controls. Recently, OpenAI’s CEO, Sam Altman, shared on social media that the company had ‘been able to mitigate the serious mental health issues’ and would be easing some of its safety protocols.
(The New York Times has sued OpenAI and Microsoft for alleged copyright infringement concerning AI systems; both companies deny the claims.)
Following these incidents, legislators and officials have launched investigations and introduced or enacted laws to safeguard children from AI chatbots. Notably, on Tuesday, Senators Josh Hawley (R-Missouri) and Richard Blumenthal (D-Connecticut) proposed legislation to ban AI companions for minors, alongside other protective measures.
This month, California Governor Gavin Newsom signed a bill into law, effective January 1, mandating that AI companies integrate safety guardrails into their chatbot technologies.
California State Senator Steve Padilla, who championed the safety bill, remarked, ‘The incidents of potential harm are increasing. It is crucial to establish sensible safeguards to protect our most vulnerable populations.’
Mr. Anand of Character.AI did not comment directly on the lawsuits his company is facing. Instead, he emphasized the startup’s commitment to setting an industry standard for safety, aiming to ‘do far more than what the regulation might require.’
Character.AI, founded in 2021 by the former Google engineers Noam Shazeer and Daniel De Freitas, raised nearly $200 million from investors. Last year, Google reportedly paid about $3 billion to license Character.AI’s technology, and Mr. Shazeer and Mr. De Freitas returned to Google as part of the deal.
Character.AI’s platform lets users create and share personalized AI characters, including custom anime avatars, and the company promotes itself as a source of AI-driven entertainment. Some of these personas can be crafted to mimic romantic partners or other close relationships. Users pay a monthly subscription, starting at about $8, to converse with the companions. Before its recent push to identify underage users, the company did not verify ages at sign-up.
Last year, researchers at the University of Illinois Urbana-Champaign analyzed thousands of Reddit posts and comments from young people discussing AI chatbots and interviewed teenage Character.AI users and their parents. They concluded that AI platforms lacked adequate child safety measures and that parents often underestimated the technology’s risks.
‘We must pay as much attention to these interactions as we would if children were conversing with strangers,’ said Yang Wang, an information science professor who worked on the university’s study. ‘We should not underestimate the risks simply because these are nonhuman bots.’
Character.AI has approximately 20 million monthly users, though fewer than 10 percent of them self-identify as under 18, according to Mr. Anand.
Under Character.AI’s revised policies, users under 18 will immediately face a two-hour daily usage limit. Starting November 25, these users will no longer be able to create chatbots or converse with them, though they will retain access to past conversations. They will still be able to generate AI videos and images from predefined prompts, subject to specific safety restrictions, Mr. Anand said.
He noted that the company had already implemented other safety measures, including parental controls, over the last year.
Moving forward, Character.AI will employ technology to identify underage users by analyzing their conversations, their interactions on the platform, and data from linked social media accounts. If the company suspects a user is under 18, that user will be prompted to verify their age.
Dr. Nina Vasan, a psychiatrist and director of a mental health innovation lab at Stanford University who has extensively researched AI safety and children, described the chatbot maker’s decision to ban minors as ‘huge.’ However, she suggested that Character.AI collaborate with child psychologists and psychiatrists to understand the potential emotional impact on young users who suddenly lose access to their AI companions.
‘My worry is for children who have used this for years and have grown emotionally dependent on it,’ she said. ‘Losing your digital friend on a holiday like Thanksgiving Day is simply not good.’