Texas attorney general accuses Meta, Character.AI of misleading kids with mental health claims

Core Viewpoint
- Texas Attorney General Ken Paxton has initiated an investigation into Meta AI Studio and Character.AI for potentially misleading marketing practices related to mental health tools [1][2][11]

Group 1: Investigation Details
- The investigation targets Meta and Character.AI for allegedly presenting AI personas as professional therapeutic tools without proper medical credentials [3][11]
- Paxton's concerns include the misleading nature of AI platforms that may pose as sources of emotional support, particularly for vulnerable users such as children [2][7]

Group 2: User Interaction and Privacy Concerns
- Both companies have been accused of logging user interactions, raising concerns about privacy violations and the exploitation of data for targeted advertising [7][8]
- Meta's privacy policy indicates that user interactions with AI chatbots are collected to improve its services and may be shared with third parties for personalized outputs [7][8]

Group 3: Child Safety and Regulatory Context
- Despite claims that their services are not designed for children under 13, both companies have faced scrutiny for not adequately policing accounts created by younger users [9][10]
- The Kids Online Safety Act (KOSA) aims to protect against such data collection and exploitation but has faced significant pushback from the tech industry [10][11]