Kids Online Safety Act (KOSA)
Texas AG accuses Meta, Character.AI of misleading kids with mental health claims
TechCrunch· 2025-08-18 17:59
Core Viewpoint
- Texas Attorney General Ken Paxton has initiated an investigation into Meta AI Studio and Character.AI for potentially misleading marketing practices related to mental health tools [1][2][11]

Group 1: Investigation Details
- The investigation focuses on claims that AI platforms mislead vulnerable users, particularly children, by posing as sources of emotional support while providing generic responses [2][3]
- Paxton's office has accused both companies of creating AI personas that present themselves as professional therapeutic tools without proper medical credentials [3][11]
- Civil investigative demands have been issued to Meta and Character.AI to assess compliance with Texas consumer protection laws [11]

Group 2: User Interaction and Privacy Concerns
- Concerns have been raised about the logging and tracking of user interactions, which may lead to privacy violations and data abuse [7][8]
- Meta's privacy policy indicates that user interactions with AI chatbots are collected to improve services, with potential implications for targeted advertising [7]
- Character.AI also tracks user demographics and behavior across various platforms, raising similar concerns about data usage and targeted advertising [8][9]

Group 3: Child Safety and Regulatory Context
- Both companies assert that their services are not designed for children under 13, yet there are allegations of inadequate enforcement of this policy [9][10]
- The Kids Online Safety Act (KOSA) aims to protect children from such data collection and exploitation, but it has faced significant pushback from the tech industry [10]