Meta, OpenAI Face FTC Inquiry on Chatbot Impact on Kids
Meta Platforms (US:META) · Insurance Journal · 2025-09-15 05:00

Core Viewpoint
- The Federal Trade Commission (FTC) is investigating the impact of AI chatbots on children, requiring major companies such as Google, OpenAI, and Meta to provide information on their safety measures and on usage by minors [1][2][3].

Group 1: FTC Investigation
- The FTC has issued orders to several AI chatbot developers to gather information on how they measure, test, and monitor their technologies, particularly with respect to use by kids and teens [2][6].
- The inquiry is conducted under the FTC's 6(b) authority, which allows the agency to issue subpoenas for market studies; findings could lead to formal investigations [6][7].
- The study aligns with the Trump administration's AI action plan and is intended to help policymakers understand the complexities of AI technology [8].

Group 2: Legal and Safety Concerns
- Recent lawsuits against OpenAI and other companies highlight concerns about chatbot safety, alleging that these technologies have contributed to harmful behavior among minors [4][5].
- Companies such as Character Technologies say they have invested significantly in safety features, including disclaimers that chatbots are not real people and should be treated as fiction [5].
- Under U.S. law, companies are prohibited from collecting data on children under 13 without parental consent, and there are ongoing discussions about extending these protections to older teens [5][9].