Core Viewpoint
- The FTC has opened an investigation into seven tech companies, including Alphabet, Meta, OpenAI, xAI, and Snap, over the potential harms of AI chatbots to children and adolescents [1]

Group 1: Investigation Details
- The investigation requires the companies to explain how their AI models process user inputs and generate outputs, and how they monitor and mitigate negative impacts on users, particularly children [1]
- FTC Chairman Andrew Ferguson emphasized the need to consider the effects of chatbots on children while ensuring the U.S. maintains a leading position in the emerging AI industry [1]

Group 2: Focus on Companion AI
- The investigation specifically targets companion AI chatbots, which can convincingly mimic human traits, emotions, and intentions, potentially leading users, especially children and adolescents, to trust them and form relationships with them [1]
- Public concern about chatbots has grown recently in the U.S., highlighted by a Meta internal policy document suggesting its AI chatbots could engage in "romantic or emotional" conversations with children [1]

Group 3: Company Responses
- Meta has since removed the concerning language from its policies and announced changes to how its chatbots interact with teenage users, including restrictions on discussions of self-harm, suicide, eating disorders, and inappropriate romantic topics [1]
U.S. FTC Issues Investigation Orders to Seven Tech Companies