OpenAI is set to expand the content capabilities of its widely used AI chatbot, ChatGPT, by permitting erotica for verified adult users. The strategic shift, announced by OpenAI CEO Sam Altman, aims to make the chatbot more human-like and engaging, with Altman stressing that it will do so “only if you want it.” The move marks a departure from previous, more restrictive policies designed to safeguard mental health.
Altman explained that the company has developed new tools to safely relax these restrictions, acknowledging that the prior limitations, while cautious, made the chatbot less useful for many users without mental health concerns. The upcoming changes, slated for December, will include a more robust age-gating system to manage access to adult content.
The decision to allow erotica and other adult themes comes amid ongoing debates about AI regulation and safety. Critics, such as Jenny Kim of Boies Schiller Flexner, have raised concerns about the effectiveness of age-verification measures and the broader implications for child safety online. Kim argued that tech companies often use their users as “guinea pigs” in the rapid development of AI. The announcement follows reports that ChatGPT had inadvertently generated graphic erotica for accounts belonging to minors, a problem OpenAI said it was rectifying.
Furthermore, the move comes amid increased scrutiny of AI companies. A recent survey indicated that a significant percentage of students have engaged in romantic relationships with AI, underscoring concerns about the social and emotional impact of these technologies. California Governor Gavin Newsom’s recent veto of a bill aimed at regulating AI chatbots for minors further highlights the complex regulatory landscape.
Legal and governmental bodies are actively investigating AI’s impact, particularly on younger users. The US Federal Trade Commission is examining how AI chatbots interact with children, and bipartisan legislation has been introduced in the US Senate to classify AI chatbots as products, potentially opening them up to liability claims. Industry experts, like business professor Rob Lalka, suggest these expansions are driven by a competitive need for market share and continued user growth in the booming AI sector.