Principles for Attributing Tort Liability in Generative Artificial Intelligence
China's First Tort Dispute Arising from an AI Hallucination Decided; Plaintiff's 9,999-Yuan Claim Dismissed
Yang Zi Wan Bao Wang· 2025-12-30 12:16
Core Viewpoint
- The case is China's first legal dispute arising from an AI hallucination, highlighting the difficulty of assigning liability for inaccurate content generated by AI systems [6].

Group 1: Case Details
- The plaintiff, Liang, asked a generative AI application about a university's admission information and received fabricated details about the university's main campus [4].
- After discovering the inaccuracies, Liang tried to correct the AI but received further incorrect confirmations, along with an offer to pay 100,000 yuan in compensation if its answer proved wrong; he then sued for 9,999 yuan in damages [4].
- The court ruled that a generative AI system lacks civil subject status and therefore cannot make legally binding statements, concluding that the AI's inaccuracies did not constitute a legal violation [5].

Group 2: Legal Implications
- The court classified the generative AI's output as a service rather than a product and applied the fault liability principle under Article 1165 of the Civil Code, a holding significant for future judicial practice in AI-related disputes [6].
- The case underscores an ongoing doctrinal debate over which liability principle should govern generative AI, with some scholars advocating fault liability and others favoring no-fault product liability [6].

Group 3: AI Hallucinations
- AI hallucinations, defined as generated content that is factually incorrect or logically inconsistent, are recognized as a significant issue; this case exemplifies a factual hallucination, in which the AI supplied information about a non-existent campus [6].
- The World Economic Forum has identified "errors and false information" among the top global risks, with AI-generated hallucinations a key contributing factor [7].
- The court urged the public to remain vigilant and to treat generative AI as an assistive tool rather than a reliable source of knowledge or a decision-making authority [7].