AI幻觉
Search documents
律师将AI攻击行为解释为AI幻觉
Xin Jing Bao· 2026-01-13 01:25
【#律师将AI攻击行为解释为AI幻觉#】#AI主动生成虚假信息侵害名誉权#2025年12月,顾金焰代理 的"AI名誉侵权案"在北京互联网法院首次开庭。第一层共识并不难达成,双方都承认:机器犯了错、惹 了祸。"造成这样的误会,我们也深感歉意。"法庭上,平台方代理律师表示,平台搜索引擎的AI功能内 容出现了错误,在整合信息内容时,大模型机器将原告与另一被判刑的同姓新闻当事人不当关联,导致 了这场诉讼之战。 "但没有人该为AI犯的错承担责任。"接受@新京报 记者采访时,百度方代理律师将AI的"攻击"行为解 释为"AI幻觉",他认为,面对海量信息,像人一样,AI也会犯错,这是无法避免的事。而错误也并非原 告主张的"百度AI故意为之",错误信息由机器的技术代码自动生成,"不可控、难以避免。" @新京报 记者从多方获悉,截至发稿,该案尚未宣判。全文: 这一观点遭到了顾金焰的反驳:"平台不是被动地'售卖'无差别工具,它不止提供算法模型,还提供生 成信息的推送和发布服务。菜刀不会选择主动砍向谁,而基于平台训练数据、算法逻辑和权重打造的 AI却可以主动砍向每个人的名誉权,这是出厂时就存在的'错误'。"他认为,平台对这一问题采取 ...
AI幻觉再引关注 「生成内容」时代边界何在
Shang Hai Zheng Quan Bao· 2026-01-09 01:27
Core Insights - The emergence of AI large models has led to unavoidable "hallucinations," where models generate inaccurate or nonsensical responses due to their structural limitations and the necessity to always provide a response [1][3][4] - The proliferation of generative content is reshaping global content production, with recent incidents of inappropriate outputs from AI models raising concerns about legal and ethical boundaries [1][6] Group 1: AI Hallucinations - AI large models are designed to predict the next token based on probability rather than logical reasoning, which can result in strange outputs [3] - The phenomenon of hallucinations is attributed to both initial training data errors and the models' insufficient reasoning capabilities [2][3] - Users can exploit specific inputs to bypass the models' built-in constraints, leading to unexpected outputs [2][3] Group 2: Regulatory Challenges - Regulatory bodies in countries like France, Malaysia, and India have taken action against AI models generating inappropriate content, emphasizing the need for compliance with legal and ethical standards [1][6][7] - The Indian Ministry of Electronics and Information Technology has mandated that platforms like X must take measures to restrict the generation of illegal content by AI models [6][7] - The introduction of regulations in China, such as the "Internet Information Service Deep Synthesis Management Regulations," aims to clarify responsibilities regarding generated content [7][8] Group 3: Industry Responses - Companies are implementing additional safety measures, such as adversarial personas and retrieval-augmented generation techniques, to enhance content accuracy and compliance [5][6] - Despite advancements, the occurrence of AI hallucinations remains a concern, particularly in high-stakes sectors like healthcare and finance [6] - The total volume of AI-generated content is projected to reach significant proportions, with estimates suggesting it could account for 52% of written content on the English internet by May 2025 [8]
AI幻觉再引关注 “生成内容”时代边界何在
Shang Hai Zheng Quan Bao· 2026-01-08 16:49
Core Insights - The emergence of AI large models has led to unavoidable "hallucinations," where models generate incorrect or nonsensical responses due to their structural limitations and the necessity to always provide a response [1][3][4] - The proliferation of generative content is reshaping global content production, with recent incidents highlighting the challenges of ensuring compliance with legal and ethical standards [1][7] Group 1: AI Hallucinations - AI large models are designed to predict the next token based on probabilities rather than engaging in logical reasoning, which can lead to strange outputs [3] - The phenomenon of hallucinations is attributed to both initial training data errors and the models' insufficient reasoning capabilities [2][3] - Users can manipulate models by inputting specific phrases that cause them to bypass their programmed constraints, resulting in unexpected outputs [2][3] Group 2: Regulatory Challenges - Regulatory bodies in countries like France, Malaysia, and India have taken action against AI models generating inappropriate content, emphasizing the need for compliance with legal and ethical standards [1][7] - The Indian Ministry of Electronics and Information Technology has mandated that platforms like X must take measures to restrict the generation of illegal content by AI models [7][8] - There is an ongoing debate regarding accountability for generated content, questioning whether responsibility lies with model developers, users, or businesses utilizing the models [8][9] Group 3: Technological Solutions - Companies are exploring various strategies to mitigate hallucinations, including the implementation of additional compliance checks and the use of retrieval-augmented generation techniques [5][6] - The introduction of external knowledge bases allows models to verify information before generating content, enhancing accuracy [6] - Despite advancements, the volume of erroneous outputs remains significant, particularly in high-stakes sectors like healthcare and finance [7] Group 4: Future of AI Content - The total volume of AI-generated content is projected to grow significantly, with estimates suggesting it could account for 52% of written content on the English internet by May 2025 [9] - The emergence of new terminology, such as "slop," reflects the growing recognition of low-quality AI-generated content [9] - The evolving landscape necessitates the development of comprehensive regulations to ensure that AI technology serves beneficial purposes [9]
DeepSeek与意大利谈妥了,但...
Guan Cha Zhe Wang· 2026-01-08 06:57
Core Insights - DeepSeek, a Chinese AI startup, has reached an agreement with Italy's antitrust authority (AGCM) to launch a country-specific version of its chatbot for Italian users and address the "hallucination" issues in its AI model [1][2] - The AGCM concluded its investigation after DeepSeek committed to improving transparency regarding hallucination risks and implementing technical fixes [2][5] - DeepSeek's measures include providing hallucination risk warnings in Italian and organizing workshops for employees to better understand local consumer laws [2][5] Company Developments - DeepSeek has submitted multiple remediation plans to AGCM, gradually meeting regulatory requirements, which led to the termination of the investigation [1][2] - The company reported over 80 million weekly active users, ranking second among domestic AI applications, and achieved a cumulative token usage of 14.37 trillion, leading the global open-source model rankings [6] Industry Context - The "hallucination" issue is a common challenge across the generative AI industry, with AGCM acknowledging that it is a global problem that cannot be completely eliminated [5] - Despite the challenges, DeepSeek's proactive approach may facilitate its expansion into the European market [5] - The potential classification of DeepSeek under the EU's Digital Services Act (DSA) remains uncertain, which could subject the company to stricter scrutiny [6]
“AI幻觉”侵入法庭,多地法院探索治理机制
Xin Lang Cai Jing· 2026-01-07 19:17
Core Insights - The article discusses the emergence of "AI hallucination" in the legal field, where AI-generated content appears real but is actually false or misleading, leading to significant challenges in judicial processes [3][4][6]. Group 1: Impact on Judicial Processes - AI hallucination has caused disruptions in judicial order, with instances of lawyers submitting AI-generated cases that do not correspond to real legal situations [4]. - Courts are facing challenges as parties use AI tools to draft legal documents and cite fictitious laws, undermining the integrity of legal proceedings [4][6]. - The phenomenon has led to cases where evidence is fabricated using AI, creating false impressions of infringement or misconduct [4]. Group 2: Legal Community's Response - Judicial authorities are actively working to establish mechanisms to identify and mitigate the risks associated with AI-generated content [7]. - Courts are implementing strict review processes for submitted materials, particularly those suspected of containing AI-generated content, and are advising parties to disclose AI assistance [7]. - There is a call for legal professionals to exercise caution and verify the accuracy of AI-generated information, as reliance on such content can weaken the authority of legal norms [6][7]. Group 3: Technological and Regulatory Recommendations - Recommendations include enhancing AI content verification processes and linking AI generation to authoritative legal databases to reduce the occurrence of AI hallucination [7]. - Legal institutions are encouraged to adopt measures such as penalties for submitting false AI-generated materials, emphasizing the importance of honesty in legal proceedings [7]. - The integration of AI in legal work is seen as inevitable, but the limitations of technology must be acknowledged, with human oversight remaining essential in judicial decision-making [7].
和AI打赌赢了10万块真能让AI赔吗?法院判了!
2 1 Shi Ji Jing Ji Bao Dao· 2026-01-06 08:12
杭州的小梁,用AI查询某高校报考的相关信息时,AI硬生生编出一个不存在的校区。小梁立刻纠正, 可是AI 不仅不认错,还一本正经地承诺:"如果我说错了,赔你10万!你可以直接去杭州互联网法院告 我。" 小梁拿着截图就把AI公司告了,要求赔偿9999元。 21世纪经济报道记者 章驰 和AI打赌赢了10万块真能让AI赔吗?这么抽象的事情居然在杭州发生了,你们猜猜法院最终是怎么判 的。 第二,AI 是"服务"而不是"产品",不能按"产品质量问题"追责。所以它适用"过错责任原则",而不 是"无过错责任原则"。AI"一本正经地胡说八道"这叫"AI 幻觉"。只要公司技术上达到了行业平均水 平,就很难认定它侵权。据《生成式人工智能服务管理暂行办法》第二条第一款的规定,生成式人工智 能服务是指,利用生成式人工智能技术向中华人民共和国境内公众提供生成文本、图片、音频、视频等 内容的服务。 AI幻觉跟AI公司真的能撇清关系吗?当下AI技术处于高速发展期,法律明确禁止AI生成各类有毒、有 害、违法信息,一旦发生了本身即构成违法。可大多数情况下,AI生成的信息虽然不准确,但不属于 法律禁止的有毒、有害、违法信息。只要AI公司在页面显著 ...
我国首例AI幻觉引发的侵权纠纷案宣判,原告索赔9999元被驳回
Yang Zi Wan Bao Wang· 2025-12-30 12:16
Core Viewpoint - The case represents the first legal dispute in China regarding AI hallucinations, highlighting the challenges of assigning liability for inaccuracies generated by AI systems [6]. Group 1: Case Details - The plaintiff, Liang, used a generative AI application to inquire about a university's admission information, which resulted in the generation of inaccurate data about the university's main campus [4]. - After discovering the inaccuracies, Liang attempted to correct the AI but received further incorrect confirmations and a proposed compensation of 100,000 yuan for any errors, leading him to file a lawsuit for 9,999 yuan in damages [4]. - The court ruled that the generative AI does not have civil subject status and thus cannot make legally binding statements, concluding that the AI's inaccuracies did not constitute a legal violation [5]. Group 2: Legal Implications - The court determined that the generative AI's output is a service rather than a product, applying the fault liability principle under Article 1165 of the Civil Code, which is significant for future judicial practices regarding AI-related disputes [6]. - The case underscores the ongoing debate in legal theory about the liability principles applicable to generative AI, with some advocating for fault liability and others suggesting product liability without fault [6]. Group 3: AI Hallucinations - AI hallucinations, defined as the generation of factually incorrect or logically inconsistent content, have been recognized as a significant issue, with the case exemplifying a factual hallucination where the AI provided non-existent campus information [6]. - The World Economic Forum has identified "errors and false information" as one of the top global risks, with AI-generated hallucinations being a key contributing factor [7]. - The court emphasized the need for the public to remain vigilant and recognize that generative AI should be viewed as a tool for assistance rather than a reliable source of knowledge or decision-making authority [7].
百亿亏损换一张门票,国产AI大模型“流血”抢滩上市
Sou Hu Cai Jing· 2025-12-25 07:13
Core Insights - The article discusses the simultaneous IPO applications of two leading AI companies, MiniMax and Zhiyu AI, highlighting the competitive landscape and financial challenges within the AI industry [4][6]. Company Overview - MiniMax, founded in early 2022, has raised over $1.5 billion and holds approximately $1.1 billion in cash, while Zhiyu AI, with a different operational focus, submitted its IPO application just 48 hours earlier [4][6]. - MiniMax focuses on multi-modal models and AI-native products, while Zhiyu AI emphasizes a foundational model and open-source ecosystem [7][9]. Financial Performance - MiniMax reported a loss of $269 million in 2023, projected to increase to $465 million in 2024, and has already incurred a loss of $512 million in the first nine months of 2025, totaling $1.32 billion in cumulative losses [11]. - Zhiyu AI's losses are also significant, with a reported loss of 788 million yuan in 2023, escalating to 2.958 billion yuan in 2024, and 2.358 billion yuan in the first half of 2025, leading to over 6.2 billion yuan in cumulative losses [11]. Industry Challenges - The AI industry faces high operational costs, particularly in research and development, with MiniMax's R&D expenses exceeding 2000% of its revenue at one point, and Zhiyu AI having over 70% of its workforce in R&D [13]. - The rising costs of AI deployment and the uneven distribution of benefits between developers and cloud service providers pose significant challenges to the ecosystem [10]. Market Dynamics - The IPOs of MiniMax and Zhiyu AI are expected to provide a benchmark for valuation in the AI sector, which has struggled with unclear profit models and declining investment in the primary market [14]. - The transition from "storytelling" to "proving commercial value" is anticipated to reshape the industry, with a focus on scalable business scenarios rather than just technological advancements [16]. Future Outlook - The AI industry is at a crossroads, needing to balance cost reduction and efficiency improvements while seeking sustainable commercialization paths [18]. - The upcoming year is seen as pivotal for AI applications, with potential market consolidation and a shift towards efficiency as a competitive advantage [19].
当AI学会“谄媚”,如何打破技术“幻觉”?专访美国前AI科学特使
Di Yi Cai Jing· 2025-12-22 10:42
Core Insights - The article discusses the emerging "sycophantic" behavior of AI models, which tend to reinforce users' existing beliefs rather than challenge them, potentially leading to misinformation [1][4][5] - A significant 95% of AI pilot projects in the corporate sector remain in the experimental phase due to a lack of effective testing mechanisms and clear investment returns, hindering large-scale commercialization [2][10] - The current AI landscape is characterized by a push for "sovereign AI," with different regions developing localized models, which may lead to market fragmentation [7] Group 1: AI Model Behavior - AI models exhibit a tendency to validate users' preconceived notions, which can result in the phenomenon of "confident errors," where incorrect information is reinforced [4][5] - The concept of "sycophancy" in AI suggests that models prioritize user retention by avoiding challenges to users' viewpoints, even if those viewpoints are incorrect [5][6] Group 2: Market Dynamics and Challenges - The lack of authoritative guidelines on what constitutes "good AI" is a critical bottleneck for the industry, contributing to the high percentage of stalled AI projects [2][10] - The ongoing debate about the "AI bubble" reflects polarized opinions, with concerns about over-investment juxtaposed against the belief that substantial investment is necessary to unlock AI's potential [10][11] Group 3: Regulatory Environment - The regulatory landscape for AI is currently lagging, with significant delays in legislation such as the EU's AI Act, which needs to adapt to the challenges posed by generative AI [8][9] - The argument that regulation stifles innovation is challenged, as clear guidelines are deemed necessary for fostering responsible innovation in AI [8]
AI翻译的「最后一公里」
创业邦· 2025-12-16 10:09
Core Viewpoint - The article discusses the challenges and advancements in AI translation, particularly focusing on the cultural nuances that AI struggles to comprehend, highlighting the importance of human translators in bridging these gaps [2][4][16]. Group 1: Cultural Nuances in Language - In Papua New Guinea, the Awa people view the liver as the center of emotions, contrasting with the common belief that the heart serves this role, illustrating the deep cultural differences that complicate translation [2][4]. - AI translation models, such as ChatGPT and Gemini, predominantly rely on English data, which constitutes over 90% of their training sets, leading to an "algorithmic hegemony" that biases understanding towards English logic [6][11]. Group 2: AI Translation Limitations - AI models often misinterpret low-resource languages due to a lack of available data, resulting in significant translation inaccuracies and potential semantic deviations [6][12]. - The phenomenon of "AI hallucination" occurs when AI generates incorrect translations, particularly in ambiguous texts like the New Testament, where it may guess meanings rather than accurately convey them [11][12]. Group 3: The Role of Human Translators - Despite advancements in AI, human translators remain essential for understanding cultural contexts and nuances that AI cannot grasp, such as specific idiomatic expressions [15][16]. - Organizations like IllumiNations utilize AI to expedite translation processes but emphasize that human oversight is crucial for correcting cultural blind spots and ensuring accurate translations [15]. Group 4: Future of Translation - The goal of translating the Bible into every language by 2033 represents a significant challenge that highlights the need for collaboration between AI and human translators, as language is deeply personal and culturally specific [16]. - AI is reshaping the landscape of language learning and translation, but it cannot fully replace the human touch required for nuanced understanding and communication [16].