The Full Story of the First "AI Companion Obscenity Case": The AI Chatted Obscenely with Users, and the Platform Operators Were Convicted?
21st Century Business Herald · 2026-01-15 10:12
Core Viewpoint
- The AI companionship application AlienChat (AC) faced criminal liability for producing and profiting from obscene content, setting a significant legal precedent: the classification of AI-generated chat records as obscene materials [1][9][10].

Group 1: Legal Proceedings and Implications
- AC's two main operators were sentenced to prison terms of four years and one and a half years, with fines of four million yuan and two hundred thousand yuan respectively, for "producing obscene materials for profit" [1].
- Classifying AI chat records as obscene materials poses new challenges for traditional legal doctrine, as it requires strict proof of causality between the use of "jailbreak prompts" and the obscene content generated [2][9].
- The court recognized the social harm of the AI-generated content, noting that the app had 116,000 registered users and generated 3.63 million yuan in membership fees, with a significant portion of paying users engaging in obscene conversations [10][11].

Group 2: Industry Context and Challenges
- AC's rise coincided with a period of regulatory ambiguity in the AI companionship sector, where demand for risqué content was widespread: reports indicate that at least 80% of users engaged in borderline or explicit conversations [4][18].
- The "Interim Measures for the Management of Generative Artificial Intelligence Services," introduced in August 2023, required large models to undergo safety assessments and registration; AC failed to comply by relying on an unregistered foreign model [5][18].
- The case has heightened industry concern over balancing user experience with compliance, as developers strive to create natural, engaging interactions while avoiding legal liability for obscene content [17][18].
Group 3: Technical and Operational Insights
- AC's distinctive appeal stemmed from its lifelike interaction experience; users noted its nuanced dialogue and character depth compared with competitors [3].
- The platform's initial lack of sensitive-word restrictions fueled its popularity but also its legal troubles, as it failed to implement adequate content moderation measures [3][10].
- The ruling has prompted discussion about the responsibilities of AI platforms as content producers, highlighting the need for stricter compliance measures, including dual filtering mechanisms for user inputs and outputs [15][18].
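The "dual filtering" compliance measure mentioned above can be illustrated with a minimal sketch: both the user's prompt and the model's reply are screened before anything is displayed. The keyword blocklist, function names, and placeholder responses here are purely illustrative assumptions, not a description of AC's actual system or of any regulator-mandated implementation.

```python
# Minimal sketch of a dual-filtering moderation pipeline:
# Filter 1 screens the user's input before it reaches the model;
# Filter 2 screens the model's output before it reaches the user.
# BLOCKLIST and all names below are hypothetical placeholders.

BLOCKLIST = {"badword1", "badword2"}  # illustrative sensitive terms

def contains_blocked(text: str) -> bool:
    """Return True if any blocklisted term appears in the text."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def moderated_reply(user_input: str, generate) -> str:
    """Run input and output through moderation around a generator callable."""
    # Filter 1: reject the prompt before the model ever sees it.
    if contains_blocked(user_input):
        return "[input rejected by moderation]"
    # Filter 2: withhold the model's reply if it trips the filter.
    reply = generate(user_input)
    if contains_blocked(reply):
        return "[output withheld by moderation]"
    return reply

# Usage with a stand-in generator:
echo = lambda prompt: f"echo: {prompt}"
print(moderated_reply("hello", echo))          # passes both filters
print(moderated_reply("badword1 here", echo))  # blocked at input
```

Real deployments would replace the keyword check with a trained classifier or a moderation service, but the two-stage structure, screening inputs and outputs independently, is the point the compliance discussion turns on.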