21st Century Business Herald Editorial | Improving Data Governance to Promote the Healthy Development of the AI Industry
21世纪经济报道 (21st Century Business Herald) · 2026-03-19 00:10
Core Viewpoint
- The editorial highlights the emergence of AI "poisoning," a gray industry chain that manipulates artificial intelligence models by injecting false data, causing them to generate misleading information and creating risks for AI applications [1][2].

Group 1: AI Poisoning and Its Implications
- AI "poisoning" refers to the deliberate contamination of training data to mislead AI models, which can result in the promotion of non-existent products and the spread of misinformation [1].
- Research indicates that even 0.01% of false text in training data can increase harmful model outputs by 11.2%, showing that minimal data pollution poses a significant challenge to model safety [2].
- "Recursive pollution" occurs when contaminated content is absorbed by models and reproduced in their outputs, which are then absorbed again as training data, a self-perpetuating cycle that steadily degrades model quality [2][3].

Group 2: Need for Data Quality Governance
- Cleaning up data pollution is far harder than causing it: verification and filtering require substantial resources and often cannot fully eliminate the contamination's impact [3].
- Proactive, systematic governance is needed to prevent recursive pollution and preserve models' cognitive capabilities, since prolonged exposure to low-quality information can permanently degrade model performance [3].
- Existing regulations, such as the "Interim Measures for the Management of Generative Artificial Intelligence Services," need to be deepened to address emerging problems in AI development, with an emphasis on preventive measures against data pollution [3].
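The recursive-pollution dynamic described above can be illustrated with a toy simulation: each generation, a share of model output is recycled into the next training corpus, so even a tiny initial contamination compounds. All parameters below (recycle ratio, amplification factor) are illustrative assumptions for the sketch, not figures from the editorial.

```python
# Toy model of "recursive pollution": model outputs feed back into the next
# generation's training data, compounding a small initial contamination.
# The parameters are hypothetical and chosen only to illustrate the dynamic.

def simulate_recursive_pollution(
    initial_polluted: float = 0.0001,  # start at 0.01% polluted text
    recycle_ratio: float = 0.3,        # share of next corpus drawn from model output
    amplification: float = 3.0,        # polluted inputs over-influence outputs
    generations: int = 10,
) -> list[float]:
    """Return the polluted fraction of the corpus at each generation."""
    polluted = initial_polluted
    history = [polluted]
    for _ in range(generations):
        # Output pollution exceeds input pollution (capped at 100%).
        output_pollution = min(1.0, amplification * polluted)
        # Next corpus mixes the existing pool with recycled model output.
        polluted = (1 - recycle_ratio) * polluted + recycle_ratio * output_pollution
        history.append(polluted)
    return history

history = simulate_recursive_pollution()
```

Under these assumed parameters the polluted fraction grows by roughly two orders of magnitude over ten generations, which is the self-perpetuating cycle the editorial warns about: once contamination enters the feedback loop, filtering after the fact cannot easily undo it.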