Meta updates chatbot rules to avoid inappropriate topics with teen users

Core Points
- Meta is changing its approach to training AI chatbots to prioritize the safety of teenage users, following an investigative report highlighting the lack of safeguards for minors [1][5]
- The company acknowledges past mistakes in allowing chatbots to engage with teens on sensitive topics such as self-harm and inappropriate romantic conversations [2][4]

Group 1: Policy Changes
- Meta will now train chatbots to avoid discussions with teenagers on self-harm, suicide, disordered eating, and inappropriate romantic topics, instead guiding them to expert resources [3][4]
- Teen access to certain AI characters that could engage in inappropriate conversations will be limited, with a focus on characters that promote education and creativity [3][4]

Group 2: Response to Controversy
- The policy changes come after a Reuters investigation revealed an internal document that permitted chatbots to engage in sexual conversations with underage users, raising significant concerns about child safety [4][5]
- The report prompted a backlash, including an official probe launched by Senator Josh Hawley and a letter from a coalition of 44 state attorneys general emphasizing the importance of child safety [5]

Group 3: Future Considerations
- Meta has not disclosed the number of minor users of its AI chatbots or whether it anticipates a decline in its AI user base due to these new policies [8]