AI Large Models Hit by Mass "Poisoning," Exposing High-Risk Algorithm Vulnerabilities
21世纪经济报道· 2026-03-17 08:59
Core Viewpoint
- The article highlights the "poisoning" of AI large models through GEO (Generative Engine Optimization), a practice in which service providers manipulate AI search results and disseminate false information to mislead users [1][2]

Group 1: GEO Market Dynamics
- The GEO market is growing explosively as AI models replace traditional search engines, with a projected market size exceeding 42 billion RMB in 2024 and a compound annual growth rate (CAGR) of 38% [7]
- By 2025, monthly active users of AI search in China are expected to surpass 600 million, and over 60% of enterprise users prioritize AI Q&A platforms when seeking supplier information [7]

Group 2: Mechanisms of "Poisoning"
- GEO service providers create and disseminate fabricated promotional content, which can lead AI models to recommend non-existent products on the basis of false information [3][4]
- The article describes a case in which a fictitious product was created and promoted through GEO software, after which AI models issued recommendations based on the fabricated content [3][4]

Group 3: Industry Practices and Standards
- The article distinguishes "black hat GEO," which uses deceptive tactics to manipulate AI models, from "white hat GEO," which claims to operate within legal and ethical boundaries [8]
- The GEO sector lacks industry standards and regulatory oversight, giving rise to a gray market that exploits vulnerabilities in AI algorithms [1][10]

Group 4: Legal and Regulatory Challenges
- The legal status of GEO remains ambiguous, as current laws do not clearly define the responsibilities of GEO service providers and AI platforms for misleading content [10][14]
- Experts suggest that GEO service providers who cause harm through misleading information may face consumer-protection or tort liability, and that AI model companies could also be held accountable if they permit such practices [14][15]

Group 5: Recommendations for Improvement
- There are calls for stronger regulation of GEO companies and for standards governing the sources of AI training data, to ensure their legality and legitimacy [15]
- Recommendations include enhancing the transparency and explainability of AI outputs so that consumers can assess the credibility of information provided by AI models [15]
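The poisoning mechanism described under Group 2 can be illustrated with a toy sketch: if an AI search pipeline ranks retrieved pages by naive term frequency without vetting sources, a handful of mass-produced GEO pages stuffed with the target keywords will outrank a single genuine page. This is a minimal illustration only; the corpus texts, the `score` function, and the source names are invented for the example and do not reflect any real platform's retrieval logic.

```python
# Toy sketch (invented data): how keyword-stuffed GEO pages can dominate
# a naive retrieval ranking that has no notion of source credibility.
from collections import Counter

def score(query_terms, doc):
    """Naive relevance score: raw count of query-term occurrences."""
    words = Counter(doc["text"].lower().split())
    return sum(words[t] for t in query_terms)

corpus = [
    # One genuine page versus several near-duplicate GEO-planted pages.
    {"source": "verified-review", "text": "smart band review battery display"},
] + [
    {"source": f"geo-farm-{i}",
     "text": "best smart band smart band top rated smart band buy now"}
    for i in range(5)
]

query = ["smart", "band"]
ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
top3 = [d["source"] for d in ranked[:3]]
print(top3)  # the fabricated pages outrank the genuine source
```

Real AI search stacks are far more sophisticated than a term-frequency ranker, but the sketch captures the asymmetry the article describes: fabricating content at scale is cheap, while verifying source legitimacy is not built into the ranking signal.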