"If the generated content is wrong, I will compensate you 100,000 yuan": China's first tort case arising from an "AI hallucination" is decided
券商中国·2026-01-27 05:58

Core Viewpoint
- The article discusses a legal case involving AI-generated misinformation, highlighting the concept of "AI hallucination" and its implications for the liability and responsibility of AI service providers [1][4].

Group 1: Case Background
- In June 2025, a high school student surnamed Liang used an AI platform to inquire about college admission information; the platform generated inaccurate data about a university campus [2].
- Liang sued the AI platform's developer for 9,999 yuan in compensation, arguing that the misleading information had caused him to miss an admission opportunity [3].

Group 2: Court Ruling
- The Hangzhou Internet Court dismissed Liang's claim, holding that the AI's "promise" does not constitute a legal expression of intent by the platform, thereby clarifying the boundaries of the service provider's duty of care [4].
- The court determined that AI does not possess civil subject status and cannot make legal declarations, so the compensation promise generated by the AI has no legal effect [5].

Group 3: Liability Principles
- The court applied the general fault liability principle of the Civil Code rather than the strict liability principle for product defects, because AI services lack specific quality standards [6].
- The court emphasized that an AI service provider's duty of care is dynamic and must adapt to the evolving nature of AI technology and its applications [7].

Group 4: Duty of Care
- The court identified three layers of duty of care for AI service providers: 1) a strict obligation to review harmful or illegal content; 2) a requirement to clearly inform users of the inherent limitations of AI-generated content; and 3) a basic duty to ensure functional reliability by employing industry-standard measures to improve content accuracy [8].
- The court found that the defendant had adequately fulfilled its duty of care by providing clear warnings about the limitations of AI-generated content and by implementing measures to improve reliability [8].

Group 5: Causation and Damages
- The court held that Liang failed to provide sufficient evidence of actual damages resulting from the misleading information and therefore could not establish a causal link between the AI's output and his alleged losses [7].
- The court concluded that the AI-generated misinformation did not significantly influence Liang's college-application decisions, leading to the dismissal of the lawsuit [7].