AI Hallucination
AI Generates a Fake Criminal Record for a Lawyer; AI Defamation Case Opens at First Instance | Nancai Compliance Weekly
21 Shi Ji Jing Ji Bao Dao · 2026-02-09 00:25
Group 1: AI Defamation Case
- A recent AI defamation case was heard in Beijing's Haidian District Court, where a lawyer claimed that AI generated false negative information about him, including serious criminal allegations [3][4]
- The defendant, Baidu, argued that AI hallucinations are unavoidable and that it provides neutral technology services without any subjective fault [4]
- The lawyer sought damages of 1 million yuan for loss of professional reputation and 50,000 yuan for emotional distress, noting that other AI platforms did not generate similar false content [4]

Group 2: AI Hallucination Legal Precedents
- The first domestic AI hallucination case established that generative AI is considered a "service" rather than a "product", so the fault liability principle applies [5]
- The court emphasized three core obligations companies must fulfill to avoid liability: clearly notifying users of potential inaccuracies, ensuring functional reliability, and completing regulatory filings [5]
- The growing number of disputes over AI hallucinations signals a significant industry challenge, requiring thorough tracing of data sources and training methods to prevent errors [5]

Group 3: Market Regulation and Competition
- Kimi, an AI product, called on Baidu to remove misleading paid advertisements that confused users by impersonating its official website [6][7]
- Market regulators have identified several cases of unfair competition involving AI, focusing in particular on impersonation and false advertising [8]
- Despite the crackdown on such practices, the platforms disseminating the misleading information have faced little accountability [8]

Group 4: WeChat Restrictions on Competitors
- WeChat has restricted the sharing of links and codes for various AI products, including Tencent's Yuanbao and Alibaba's Qianwen, citing disruption of platform order and user experience [9][11]
- The move follows a historical precedent in which WeChat blocked Taobao links during a major shopping event, indicating a pattern of controlling competitive dynamics [11]

Group 5: Regulatory Developments in AI
- The European Union has mandated that Google ensure fair access to its ecosystem for third-party AI service providers, aiming to maintain a competitive environment [11]
- The EU's regulatory actions will clarify how Google should provide equal access to data and functionality for AI services, promoting a level playing field [11]

Group 6: Penalties for Non-Compliance
- Kuaishou was fined 119 million yuan for failing to manage inappropriate content on its platform, reflecting stricter enforcement of cybersecurity law [12]
- The recent increase in penalty ceilings under the revised cybersecurity law signals a tougher regulatory landscape for large platforms, underscoring the need for compliance [12]
Case Over Baidu AI Generating False Criminal Information About Another Person Opens; Baidu's Defense: It Was AI Hallucination, Not Intentional
Xin Lang Cai Jing· 2026-02-07 09:27
Core Viewpoint
- The case against Baidu involves allegations of generating false criminal information through AI; the company claims that such "AI hallucinations" are an unavoidable phase of product development and do not constitute infringement [1]

Group 1: Legal Proceedings
- The lawsuit was filed by lawyer Huang Guigeng against Baidu for defamation, seeking compensation of 1 million yuan [1]
- The first hearing of the case took place on February 6 at the Haidian District People's Court in Beijing [1]

Group 2: Company Defense
- Baidu's defense argues that the AI-generated false information results from developmental challenges and was not a deliberate act of infringement [1]
Case Over Baidu AI Generating False Criminal Information About Another Person Opens; Baidu's Defense: It Was AI Hallucination, Not Intentional
Xin Lang Cai Jing· 2026-02-07 08:55
Core Viewpoint
- Lawyer Huang Guigeng has filed a defamation lawsuit against Baidu over AI-generated false criminal information about him, seeking 1 million yuan for reputational damage and 50,000 yuan for emotional distress [2][8]

Group 1: Lawsuit Details
- The lawsuit was filed in the Haidian District People's Court in Beijing, with the first hearing taking place on February 6, 2026 [2][10]
- Huang claims that Baidu's AI generated severe false negative information, including accusations of threatening judges and bribery, which was disseminated to his clients and their families, causing significant distress and loss [8]
- The court accepted the case on November 2, 2025, as an AI-related infringement dispute [8]

Group 2: Baidu's Defense
- Baidu argues that the content is a result of "AI hallucination", a common issue in the development of generative AI, and does not constitute direct or indirect infringement [2][9]
- The company claims the AI's output is based on natural language processing and has no independent intent, so it cannot be held liable for the generated content [9]
- Baidu maintains that AI hallucination is not a defect, will be addressed through iterative technical improvement, and involved no subjective fault on its part [9]
When AI-Generated Content Sparks a Dispute, Who Should "Foot the Bill"?
Mei Ri Shang Bao· 2026-02-05 00:16
Shangbao dispatch (Reporter Zhu Huili) - An article automatically generated by AI packaged an unrelated company as an "important subsidiary" of a well-known enterprise, triggering an unfair competition dispute. Facing the accusation, the self-media blogger involved felt aggrieved, saying bluntly: "The content was generated by AI. Why should I bear responsibility?" So when errors in AI-generated content spark a dispute, who should "foot the bill"? Recently, the Binjiang District People's Court of Hangzhou concluded an unfair competition case arising from the dissemination of AI-hallucinated content.

Users must fulfill a duty of review

"The limitations of technology should not become a 'safe harbor' for users to evade responsibility; what the law observes and regulates is always human conduct," said Ni Xiaohua, chief judge of the Baimahu People's Tribunal (Data and Intellectual Property Tribunal) of the Hangzhou Binjiang District People's Court. For AI-generated factual information concerning other market players, users should fulfill a duty of review commensurate with their capacity for care. The article in question misdescribed a non-party company as an enterprise under the plaintiff, and the relevant equity relationships and strategic plans could have been preliminarily verified through public business registration records; yet the defendant published without any verification, causing false information to spread, which constituted clear fault.

Self-media blogger published an AI-generated article fabricating corporate affiliations

The reporter learned from the Hangzhou Binjiang District People's Court that the plaintiffs, Alibaba Group Holding Limited and Hangzhou Alibaba Advertising Co., Ltd., are well-known internet enterprises. The defendant, Li, operates the Baidu Baijiahao account "地某", which is certified as an e-commerce promotion account with tens of thousands of followers, and through ...
From Comparison Shopping to Letting AI Do It: A Quiet "Cognitive Hijacking"
Sou Hu Cai Jing· 2026-02-01 08:43
The morning alarm rings and you lie in bed agonizing over what to wear; weekend plans come up and you fret over which restaurant suits your taste. In the past, our habit was to open a shopping app and "compare three shops" among a sea of products, trading time and patience for a relatively satisfying answer. Today this scene is undergoing a fundamental shift: we are increasingly in the habit of asking AI directly, "Pick me a shirt suitable for commuting" or "Based on my tastes, recommend a highly rated restaurant."

The better AI knows you, the more precisely it can "scheme" against you. It wages "psychological warfare": first displaying high-priced items to anchor your perception of a product, then pushing low-priced ones so you feel you have grabbed a bargain; it uses "low stock" alerts to create urgency and "everyone is buying this" to manufacture consensus. You believe you are choosing autonomously, when in fact you may already be trapped in a digital cage arranged by algorithms. The cage is even closed-loop in matters of taste: AI "seeds" desires through content, then presents a slimmer-than-reality virtual world through flawless AI models, drawing you ever deeper into a pre-scripted persona.

The shopping guide inside the "black box": who defines your "optimal choice"?

Is the "optimal choice" AI offers based on product strength, or on spending power? Consumers have no way to tell. An AI recommendation engine is, in essence, an unsupervisable "black box". The platform is at once a waiter serving users and a boss profiting from merchant advertising. One review found that when asked where a batch of beauty products could be bought most cheaply, the platform the AI recommended was priced nearly 60% higher than another mainstream platform. This kind of ...
When AI "Makes Mistakes", Who Is Responsible?
Yang Shi Xin Wen· 2026-01-31 19:46
Group 1
- AI is increasingly integrated into various aspects of life and work, but it can make errors, raising questions of accountability, especially in critical fields such as healthcare and finance [1][11]
- The case of Liang, who was misled by AI about a non-existent school, marks the first legal instance addressing AI's "hallucination" problem and raises the question of who is responsible for AI-generated misinformation [1][3]
- The court determined that the AI's compensation promise does not equate to liability on the service provider's part, categorizing AI-generated information as a service rather than a product and thus applying the fault liability principle [5][7]

Group 2
- In medicine, the integration of AI raises concerns about misdiagnosis and responsibility for errors, with experts emphasizing that AI should assist rather than replace human judgment [11][19]
- The current legal framework does not clearly define AI's role in medical decision-making, prompting calls for regulations that clarify the responsibilities of doctors and AI developers [21][22]
- AI in healthcare is seen as a tool to enhance efficiency, but there are fears that over-reliance on it could erode the diagnostic skills of future medical professionals [15][17]

Group 3
- In the automotive sector, the transition from L2 to L3 autonomous driving systems necessitates a reevaluation of liability, with current regulations still placing primary responsibility on human drivers [23][24]
- As L3 systems are tested, responsibility for accidents may shift to manufacturers under certain conditions, but drivers must remain vigilant and ready to take control [26][29]
- The complexity of liability in L3 autonomous driving scenarios highlights the need for clear legal definitions and frameworks covering accidents involving AI systems [30][32]
Governing "AI Hallucination" Requires Balancing Innovation and Responsibility
Xin Lang Cai Jing· 2026-01-30 18:44
Recently, the Hangzhou Internet Court issued a first-instance judgment in the country's first "AI hallucination" infringement case, rejecting the plaintiff's claims. The judgment has drawn wide public attention and discussion.

"AI hallucination" refers to generative artificial intelligence outputting content that appears reasonable and coherent but is in fact inconsistent with the facts, absent any real basis. The core dispute in this case was whether the AI's generation of erroneous information, accompanied by a promise that "if the content is wrong, 100,000 yuan in compensation will be paid", gave rise to tort liability on the part of the service provider. Judged by its reasoning, the court's findings were grounded throughout in legal principle and technical reality. On the one hand, the court made clear that AI lacks civil-subject status: the content it generates is not an expression of the service provider's intent and carries no legal effect. This finding holds the line on the civil-subject system and avoids the conceptual error of equating a technical tool with a legal subject. On the other hand, the judgment established that AI infringement disputes are governed by the general fault liability principle, not the no-fault liability principle of product liability. This is mainly because generative AI services lack the defined uses and quality-inspection standards of a fixed product, and service providers cannot fully foresee or control their output; applying no-fault liability would overburden enterprises and could suppress technological innovation.

Generative AI is spreading rapidly, but owing to the limits of its underlying technology, "AI hallucination" has not yet been fully solved. This case reminds us once again that AI remains an auxiliary tool, not an authoritative basis for decisions. Content it generates, especially where the disposal of rights or professional judgment is involved, still requires multi-party verification ...

(Source: Tuanjie Bao) By Guo Yu
Deceived by AI: Can You Claim Compensation?
Xin Lang Cai Jing· 2026-01-30 10:23
□ 锐见

When AI (artificial intelligence) tells you a completely wrong fact in a tone of utter certainty, and even vows "if there is an error, I'll pay you 100,000!", can you claim compensation?

(Image source: People's Court Daily)

Recently, the Hangzhou Internet Court concluded the country's first infringement dispute arising from AI "hallucination", ruling to reject the user's claim for compensation. The plaintiff in the case, while using an AI application to look up university information, received inaccurate content along with the AI's promise that "if the generated content is wrong, 100,000 yuan in compensation will be paid".

The heart of the case: when inaccurate AI-generated information misleads someone, does it constitute infringement? In ruling to reject the user's claim, the Hangzhou Internet Court made clear the fundamental legal position that "AI is not a person".

The court held that AI lacks civil-subject status and that its "promise" cannot be regarded as an expression of the platform's intent, a finding that strikes at the crux of the dispute.

In this judgment, the court further clarified the principle for attributing liability. Under the Interim Measures for the Administration of Generative Artificial Intelligence Services, generative AI falls within the category of "services", not "products". The case was therefore governed by the general fault liability principle of the Civil Code of the People's Republic of China, not the no-fault liability principle of product liability. This means that AI generating inaccurate information does not in itself constitute infringement; liability can be pursued only on proof that the platform was at fault.

Most importantly, on review the court found that the defendant had completed large-model filing and security assessment, and had, at the application interface, in the user agreement, and at multiple other levels fulfilled its ...
Pengpai Cartoon Commentary | When AI Spouts Nonsense With a Straight Face, Who Bears Responsibility?
Xin Lang Cai Jing· 2026-01-29 11:33
Liang, the older brother of a college entrance exam candidate, discovered after looking up university information that an AI platform had generated erroneous information; the AI even declared it would pay 100,000 yuan in compensation if the content was wrong. In anger, Liang sued the AI platform's developer. Recently, the Hangzhou Internet Court issued a first-instance judgment rejecting his claims.

This first AI hallucination case in the country has sparked wide discussion. The court's reasoning was that AI is not a civil subject and therefore cannot make an expression of intent. Then why is the company providing the AI service not liable either? The presiding judge held: "In the consultative Q&A scenario of this case, we do not consider [the output] to be an expression of the service provider's intent." The judgment offers several lessons.

First, under current technical conditions AI hallucination remains unavoidable, and the court's ruling can be seen as a form of protection for technological innovation. Second, for the public, AI-generated content may serve as a reference but must never be blindly trusted. Third, the rapid development of AI technology has already challenged existing law and must be regulated with foresight. (Illustration/Jiang Lidong; Text/Dongping, Zhitong Finance)
When AI Starts Earnestly Talking in Its Sleep, How Do We Stay "Digitally Sober"?
Jing Ji Guan Cha Wang· 2026-01-29 06:07
Jingji Guancha Wang, citing the CCTV News client: "If the generated content is wrong, I will compensate you 100,000 yuan." Recently, while looking up university information, Liang found that the information generated by an AI platform contained errors. When he challenged the AI about this, it replied in all seriousness: "If the content is wrong, I will compensate you 100,000 yuan." In anger, Liang actually sued the AI platform's developer, demanding 9,999 yuan in compensation. After hearing the case, the court rejected the plaintiff's claims, holding that the developer had fulfilled its obligations. Why does AI so often spout nonsense with a straight face? And how can we stay "digitally sober"?

"Wu Song uprooted the willow": where do AI "hallucinations" come from?

You ask the AI a question and it gives you an especially detailed, rich, seemingly logical answer; but when you go to verify it, you find the information is entirely fabricated. This is the "AI hallucination" phenomenon, and its cause lies in how generative AI works.

1. Prediction, not understanding

Experts explain that at the current stage AI is essentially a "probability calculator", not a genuine thinker. Its operation breaks into three steps: feeding in data, learning patterns, and performing inference. By being "fed" vast amounts of training data, the AI learns which words frequently appear together, and then, given a question, produces the most probable answer word by word. For instance, if we ask the AI "In Water Margin, whose story is the episode of uprooting the willow?", it may reason that "uprooting the willow" comes from Water Margin, and Water Margin ...
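The three steps the expert describes (feed in data, learn patterns, infer) can be sketched with a toy next-word predictor. This is a minimal illustrative assumption, not the article's or any vendor's actual model: a bigram counter that, like the "probability calculator" described above, emits whichever word most often followed the previous one in its training data, regardless of whether that continuation is factually right.

```python
from collections import Counter, defaultdict

# Toy training corpus (an assumption for illustration). The model sees only
# word co-occurrence statistics, never the truth of any statement.
corpus = (
    "lu zhishen uprooted the willow . "
    "wu song fought the tiger . "
    "wu song fought the tiger ."
).split()

# Steps 1-2: "feed data, learn patterns" -- count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev_word):
    """Step 3: "inference" -- emit the statistically most likely next word."""
    return bigrams[prev_word].most_common(1)[0][0]

# After "the", the corpus contains "tiger" twice but "willow" only once,
# so the model continues with "tiger" even if the question was about the
# willow -- a miniature version of the hallucination mechanism above.
print(predict("the"))  # -> tiger
```

The point of the sketch is that nothing in the model represents *who* uprooted the willow; it only knows which continuation is more frequent, which is exactly why a fluent, confident answer can still be fabricated.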