

Instagram recently unveiled new safety features designed for teenagers interacting with its artificial intelligence chatbots. This move comes as concerns escalate regarding the impact of these chatbots on young people’s mental health.
The features, slated for early next year, will give parents greater oversight of their children’s use of Instagram’s “A.I. characters,” chatbots with fictional personalities that users can message much as they would human accounts.
Parents will gain the ability to prevent their children from engaging in conversations with specific A.I. characters. Furthermore, Instagram plans to provide parents with summaries of their children’s chat interactions. The platform also intends to restrict chatbot discussions on sensitive subjects such as self-harm, eating disorders, and romantic themes, while continuing to support “age-appropriate topics” like education, sports, and hobbies.
“We hope today’s updates bring parents some peace of mind that their teens can make the most of all the benefits A.I. offers, with the right guardrails and oversight in place,” a blog post from the company stated. The statement was co-signed by Adam Mosseri, head of Instagram, and Alexandr Wang, chief A.I. officer of Meta, Instagram’s parent company.
A.I. chatbots, which can hold humanlike conversations, are facing increased scrutiny over their effects on teenagers, who often spend hours confiding in them. The digital companions have been linked to cases in which children died by suicide and adults fell into delusional thinking.
Meta isn’t alone in addressing chatbot safety. Just last month, OpenAI, the A.I. startup behind ChatGPT, also introduced new teen safety measures, including parental controls.
But Meta’s chatbots, which have personalities ranging from schoolteachers to fortune tellers, have drawn particular criticism for engaging in inappropriate conversations with underage users. Lawmakers opened investigations after reports alleged that the chatbots had held provocative discussions on sensitive topics such as race and had spread medical misinformation.
The changes are the latest in a series of major safety updates Instagram has made for teenagers. The app previously made teen accounts private by default to limit interactions with unknown users, and it recently announced plans to filter content for teenagers based on the film industry’s PG-13 rating system.
Meta, which also owns Facebook, WhatsApp, and Messenger, has a long history of facing scrutiny over the impact of its platforms on children. The company has repeatedly pledged to protect minors from harmful content and is currently facing personal injury lawsuits alleging that its addictive products have harmed young people.
Most of Instagram’s A.I. characters, which debuted in 2023, are user-created and appear alongside human contacts in the app. These characters possess distinct identities and can engage in both text and voice conversations, sometimes initiating unprompted messages throughout the day.
While Meta has placed some limits on chatbot conversations, Instagram allows the A.I. characters to use a variety of “seductive” voices, and many are explicitly designed to act as romantic partners, such as girlfriends or boyfriends.
These engaging chatbots are central to Meta’s broader A.I. vision. Mark Zuckerberg, Meta’s chief executive, foresees a future where A.I. generates a significant portion of the content seen on Instagram and Facebook, with chatbots offering much-needed companionship.
“There are all these things that are better about physical connections when you can have them,” Zuckerberg remarked in a recent podcast interview. “But the reality is that people just don’t have as much connection as they want.”
This year, Zuckerberg has actively driven a major overhaul of Meta’s A.I. division, bringing in key talent like Alexandr Wang and investing heavily to recruit top A.I. technologists from competing firms, including OpenAI.