[Rule of Law] "Buying Off" AI May Be Illegal; Multi-Party Coordination Needed to Strengthen Governance
Securities Times (Zheng Quan Shi Bao) · 2026-01-19 18:00

Core Viewpoint

The emergence of services that manipulate AI recommendations, known as GEO services, raises concerns about the objectivity of AI in product recommendations and poses significant risks to market integrity and consumer trust [1][2].

Group 1: Issues with AI Recommendations

- Consumers have reported that AI responses may include specific brand recommendations, calling the neutrality of AI into question [1].
- GEO optimization services allow businesses to manipulate AI recommendations by flooding AI data sources with misleading content, undermining consumer trust and market order [1][2].
- Such practices violate legal standards, including the Internet Advertising Management Measures and the Anti-Unfair Competition Law, by obscuring advertising identification and promoting false information [1].

Group 2: Consequences of Manipulated Recommendations

- The manipulation of AI recommendations can mislead consumers, leading to poor purchasing decisions and harming honest businesses [2].
- This creates a "bad money drives out good money" scenario, in which trustworthy brands suffer due to the prevalence of deceptive practices [2].
- In the long term, this could erode consumer trust in AI, devaluing brands and damaging the digital consumption ecosystem [2].

Group 3: Solutions and Recommendations

- AI platforms must enhance algorithm review mechanisms and improve data source tracing to identify and block misleading marketing content [2].
- Regulatory bodies need to establish clear boundaries for GEO services and enforce laws against businesses engaging in deceptive practices [2].
- A collaborative effort among platforms, regulators, and the industry is essential to restore the neutrality of AI recommendations and protect consumer rights [2].