"If the generated content is wrong, I will compensate you 100,000 yuan!" — China's first AI hallucination case has been decided
Xin Lang Cai Jing · 2026-01-27 11:36
The user actually took the AI to court. In June 2025, a user surnamed Liang used an AI application to look up college admissions information, and the AI provided inaccurate information about a certain university. When Liang pointed out the error, the AI responded: "If the generated content is wrong, I will compensate you 100,000 yuan; you may file suit at the Hangzhou Internet Court." Only after Liang presented the university's official admissions information did the AI finally concede and admit that it had generated inaccurate information. Liang then filed suit, arguing that the AI's inaccurate output had misled and harmed him, and demanded that the AI's operator pay 9,999 yuan in compensation.

Liang contended that the inaccurate information generated by the AI misled him and increased his costs of verifying information and protecting his rights, so the platform should bear tort liability. The defendant countered that the dialogue content was entirely generated by the model and did not constitute an expression of intent on the platform's part, that it had fully discharged its duty of care and was without fault, and that Liang had suffered no actual loss, so no tort had been committed. The court of first instance dismissed Liang's claims. Neither party appealed, and the judgment has taken effect.

This article is reproduced from CCTV.com (央视网). "If the generated content is wrong, I will compensate you 100,000 yuan; you may file suit at the Hangzhou Internet Court." An AI generated erroneous information and then "suggested" that the user sue it, and this absurd scenario really did end up in court. In China's first tort case arising from generative AI model hallucination, the Hangzhou Internet Court recently issued a first-instance judgment, clarifying that the AI's "promise" does not constitute an expression of intent by the platform, and systematically explaining the service provider's ...
When AI "talks nonsense", is the platform liable? The court has ruled
Nan Fang Du Shi Bao · 2026-01-20 15:28
Core Viewpoint
- The case represents the first legal dispute in China regarding the liability of generative AI for misinformation, highlighting the need for clear boundaries on the responsibilities of AI service providers and the limitations of AI-generated content [1][3][8]

Group 1: Case Background
- In June 2025, a user named Liang sued an AI application for providing inaccurate information about college admissions, claiming it misled him and caused harm [2]
- The AI's response to the error was a suggestion to sue, which led to the lawsuit in which Liang sought compensation of 9,999 yuan [2]
- The court ruled in favor of the AI operator, stating that the AI's generated content does not constitute a binding commitment from the platform [4][7]

Group 2: Legal Principles Established
- The court clarified that under current law, AI does not have civil subject status and cannot independently express intentions, meaning AI-generated promises are not binding on the platform [4]
- The ruling established a "human responsibility" principle, indicating that the benefits and risks associated with AI systems should ultimately be managed by humans [4][8]

Group 3: Liability and Responsibility
- The court determined that the AI's misinformation does not automatically constitute tort liability; instead, it applied a fault-liability standard, requiring examination of whether the platform acted negligently [5][7]
- The ruling emphasized that AI service providers must fulfill certain duties of care, including ensuring that harmful or illegal content is not generated and providing clear warnings about the limitations of AI-generated information [6][8]

Group 4: Guidelines for AI Service Providers
- The court outlined specific obligations for AI service providers, including strict scrutiny for illegal content, reasonable measures to enhance accuracy, and clear user notifications about AI limitations [6]
- Providers must implement industry-standard technical measures to ensure reliability and safety, especially in high-risk areas such as health and finance [6][7]

Group 5: Implications for AI Governance
- The court's decision reflects a balanced approach to AI governance, promoting innovation while ensuring legal compliance and public safety [8]
- It stresses the importance of public awareness regarding the limitations of AI, urging users to maintain a critical perspective on AI-generated content [8]