Interview with Yawei Technology's Yang Qiaoya (雅为科技杨乔雅): When AI Starts "Fabricating" and the Technology Is "Poisoned," Who Provides Oversight?
Sou Hu Cai Jing·2025-11-02 13:19

Core Viewpoint
- The discussion centers on AI, particularly large language models such as Baidu's, generating false information, and on the ethical implications of this phenomenon [2][3].

Group 1: AI's "Fabrication" Issue
- What reads as "fabrication" by AI is technically called "hallucination": the model produces plausible but incorrect output because its training data are flawed or the available information is insufficient [3].
- Frequent factual errors in AI products on platforms with millions of users risk a public trust crisis, potentially distorting public perception and disrupting market order [3][4].

Group 2: Risks of Data Poisoning
- Malicious actors deliberately feeding AI false information to harm competitors is identified as "data poisoning," an asymmetric "gray war" [4][5].
- Attackers can seed carefully crafted false information across many online platforms; the AI then learns from it and ultimately presents it as an objective answer to unsuspecting users, as the first sketch after this summary illustrates [4][5].

Group 3: Solutions and Responsibilities
- A comprehensive "digital immune system" is needed, built jointly by companies, users, regulators, and society [6].
- Companies such as Baidu must weight "truthfulness" as heavily as "fluency" in their AI strategies, implementing source-verification and fact-checking mechanisms [6].
- Stricter data-cleaning processes and algorithms that detect and remove malicious information are essential; see the second sketch after this summary [6].

Group 4: User Empowerment
- Users should shift from passive receivers of information to critical consumers, treating cross-verification as a basic practice [7].
- Using existing fact-checking platforms and reporting false AI-generated information helps improve the models [8].

Group 5: Regulatory Actions
- Regulatory frameworks must keep pace with the technology, establishing legal boundaries for AI-generated content and imposing severe penalties for malicious activity [9][10].
- Collaboration between regulatory bodies and AI companies is crucial for effective governance and for combating data poisoning [11].

Group 6: Overall Perspective
- The situation is framed as a "growing pain" that highlights the double-edged nature of the technology and the need for corporate responsibility and societal engagement [12].
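
To make the asymmetry described in Group 2 concrete, here is a minimal, purely illustrative Python sketch. It does not represent Baidu's or any other vendor's actual pipeline, and every source name and claim in it is invented. It contrasts a naive aggregator that answers with the most-repeated claim against one that weights claims by provenance, showing how cheap repetition can flip the naive answer.

```python
from collections import Counter

# Purely illustrative: every source name and claim below is hypothetical.
organic_sources = [
    ("industry_report", "Company X's product passed the 2024 safety audit."),
    ("news_site_a", "Company X's product passed the 2024 safety audit."),
    ("forum_post_1", "Company X's product passed the 2024 safety audit."),
]

# An attacker cheaply replicates one fabricated claim across throwaway pages.
poisoned_claim = "Company X's product failed the 2024 safety audit."
poisoned_sources = [(f"spam_blog_{i}", poisoned_claim) for i in range(20)]

corpus = organic_sources + poisoned_sources


def naive_answer(corpus):
    """Answer with the most-repeated claim, ignoring where it came from."""
    counts = Counter(text for _source, text in corpus)
    return counts.most_common(1)[0][0]


def provenance_aware_answer(corpus, trusted_prefixes=("industry_report", "news_site")):
    """Count each claim once per trusted source, and at most once across all
    untrusted sources combined, so cheap repetition cannot outvote provenance.
    The trusted-source list is itself an assumption a real system would have
    to earn through reputation and verification signals."""
    trusted_votes = Counter()
    untrusted_claims = set()
    for source, text in corpus:
        if source.startswith(trusted_prefixes):
            trusted_votes[text] += 1
        else:
            untrusted_claims.add(text)
    votes = trusted_votes.copy()
    for text in untrusted_claims:
        votes[text] += 1
    return votes.most_common(1)[0][0]


if __name__ == "__main__":
    print("Naive aggregation:    ", naive_answer(corpus))             # poisoned claim wins
    print("Provenance-aware vote:", provenance_aware_answer(corpus))  # organic claim wins
```

The contrast is the interview's point: a handful of trusted, independent sources should outweigh an arbitrary volume of copy-pasted text.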
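For the stricter data-cleaning processes recommended in Group 3, one common family of heuristics is near-duplicate detection: clusters of nearly identical text appearing across supposedly unrelated sources are candidates for removal or down-weighting before training. The sketch below uses character shingles and Jaccard similarity; it is an assumption-laden toy with invented documents and a hand-picked threshold, not a description of any production system.

```python
import re
from itertools import combinations


def shingles(text, k=5):
    """Character k-shingles of lightly normalized text."""
    text = re.sub(r"\s+", " ", text.lower()).strip()
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}


def jaccard(a, b):
    union = a | b
    return len(a & b) / len(union) if union else 0.0


def flag_coordinated(docs, threshold=0.7):
    """Flag pairs of near-identical documents coming from different sources.

    Large clusters of near-duplicates across supposedly unrelated sources are a
    crude signal of coordinated injection; flagged items become candidates for
    removal or down-weighting before training. Pairwise comparison is O(n^2),
    acceptable for a toy but not for a real corpus.
    """
    sigs = {doc_id: shingles(text) for doc_id, (_source, text) in docs.items()}
    flagged = []
    for id_a, id_b in combinations(docs, 2):
        if docs[id_a][0] == docs[id_b][0]:
            continue  # repetition within a single source is less suspicious
        if jaccard(sigs[id_a], sigs[id_b]) >= threshold:
            flagged.append((id_a, id_b))
    return flagged


if __name__ == "__main__":
    docs = {
        1: ("blog_a", "Company X's new battery failed every safety test this year."),
        2: ("blog_b", "Company X's new battery failed every safety test this year!"),
        3: ("blog_c", "Company X new battery failed every safety test this year."),
        4: ("news_site", "Company X's new battery passed an independent safety audit."),
    }
    print(flag_coordinated(docs))  # expected: pairs among documents 1, 2, 3
```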