AI“投毒”
Search documents
315曝光的“AI投毒”原理:GEO这样操控大模型推荐
量子位· 2026-03-16 11:33
Core Viewpoint - The article discusses the emergence of a gray industry related to AI "poisoning," where fake products are promoted through AI-generated content, highlighting the risks of misinformation in AI systems [2][11][60]. Group 1: AI "Poisoning" and GEO - AI "poisoning" refers to the systematic injection of false or misleading information into AI models to manipulate their outputs [11][12]. - Generative Engine Optimization (GEO) is a strategy aimed at enhancing the visibility of brands in AI-generated responses, similar to traditional SEO but focused on AI platforms [6][9][10]. - The process of AI "poisoning" involves three main technical methods: training data pollution, retrieval context hijacking, and prompt injection attacks [13][32]. Group 2: Technical Methods of AI "Poisoning" - **Training Data Pollution**: This method involves altering publicly available knowledge sources to embed false information into AI training data, leading to long-term biases in AI outputs [16][19]. - **Retrieval Context Hijacking**: Attackers manipulate the retrieval process by flooding the internet with content that is more likely to be selected by AI, creating an information monopoly [22][27]. - **Prompt Injection Attacks**: This technique involves embedding biased prompts in external information sources, influencing AI responses based on the injected content [33][36]. Group 3: The Process of AI "Poisoning" - The AI "poisoning" process consists of content production, channel distribution, and effect reinforcement, where attackers generate numerous promotional articles using AI [37][45]. - Attackers utilize a network of self-media accounts across various platforms to create the illusion of widespread discussion about a product [46][53]. - Continuous monitoring of AI responses is essential for attackers to adjust their strategies and ensure their content remains influential [58][60].