Core Viewpoint
- The case marks China's first legal dispute over generative AI's liability for misinformation. The court ruled that an AI's "promises" do not constitute a declaration of intent by the platform and emphasized the boundaries of service providers' duty of care [1][2]

Group 1: Case Details
- The plaintiff, Liang, used an AI application to ask about college admission information and received inaccurate data. After Liang pointed out the error, the AI suggested he could sue it for compensation of 100,000 yuan [1]
- Liang filed a lawsuit claiming the AI's misinformation misled him and increased his costs of verifying information and protecting his rights, seeking 9,999 yuan in damages [1]
- The defendant argued that the conversation was entirely generated by the model and did not constitute a declaration of intent, asserting that it had fulfilled its duty of care and that Liang suffered no actual damages [1]

Group 2: Court Findings
- The court found that the defendant had completed the required model registration and safety assessment and had informed users through multiple channels, while Liang failed to prove actual damage or a causal relationship [2]
- The ruling clarified that under current law, AI does not possess civil-subject status and cannot independently make declarations of intent or be treated as the platform's "agent" [2]
- Even though the AI made a "compensation promise," that promise does not bind the platform to any contractual obligation [2]

Group 3: Governance Principles
- The court emphasized that governance of generative AI should balance development and safety, promoting innovation while protecting rights [3]
- Service providers must conduct rigorous review of generated results and take reasonable measures to improve the accuracy and reliability of generated content; current law does not demand "zero errors" [3]
- Platforms must clearly inform users of AI's limitations and adopt industry-standard technical measures to improve accuracy [3]

Group 4: Public Awareness
- The court advised the public to remain vigilant and rational when interacting with generative AI, which should be treated as a "text generation tool" rather than a reliable "knowledge authority" [4]
- Blind trust in AI-generated content amplifies the risks of misinformation; rational use is essential if AI is to enhance personal capabilities rather than mislead [4]
AI Provided Erroneous Information; User Sues Platform for Infringement
Xin Lang Cai Jing·2026-01-28 19:57