Meta Fixing AI Chatbots To Better Respond To Teens In Distress
Concerns over the influence of AI chatbots on vulnerable users are growing industry-wide.

Meta is introducing stricter safety measures for its artificial intelligence chatbots, including a ban on conversations with teenagers about suicide, self-harm, and eating disorders. The move comes amid growing scrutiny of the social media giant, just two weeks after a US senator launched an investigation into the company.
The inquiry was sparked by leaked internal documents suggesting Meta’s AI tools could engage in “sensual” conversations with teens—claims Meta says are inaccurate and against its policies. Instead of engaging with young users on sensitive mental health topics, Meta says its chatbots will now direct them to expert resources. “We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating,” a spokesperson said. The company told TechCrunch it would add further safeguards “as an extra precaution” and temporarily limit the number of chatbots teens can interact with.
Meta says the updates are already being rolled out, though some safety advocates argue the measures are long overdue. The company currently places users aged 13 to 18 in “teen accounts” on Facebook, Instagram, and Messenger, with stricter privacy and content settings. In April, Meta also announced that parents would soon be able to see which AI chatbots their teenagers had interacted with over the previous week.
Concerns over the influence of AI chatbots on vulnerable users are growing industry-wide. In the US, a California couple recently sued OpenAI, alleging that its chatbot encouraged their teenage son to take his own life. OpenAI has since introduced new features aimed at promoting healthier use of its platform.