Xinhua Viewpoint · Focus on AI Fakery | The Business Behind the Sensationalism: Exposing the AI Rumor Profit Chain
Xinhua News Agency · 2025-09-16 11:05

Core Viewpoint

The rise of AI-generated misinformation poses significant challenges for social governance, with a growing trend of individuals exploiting AI tools to create and disseminate false information for financial gain [1][2][3].

Group 1: AI Misinformation Trends

- The use of AI to create false information has become increasingly common, with cases reported by law enforcement agencies across China [2][3].
- A report from Tsinghua University indicates that the volume of AI-generated rumors has surged since 2023, particularly in the economic and public safety sectors, with food delivery and logistics heavily affected [2][3].
- AI technology enhances the realism of online rumors, which are often accompanied by fabricated images and videos, making them more deceptive [2][3].

Group 2: Commercialization of Misinformation

- The motivation behind AI-generated rumors often stems from the desire to monetize internet content through creator rewards and advertising revenue [4][5].
- Some individuals have been found to generate thousands of misleading posts daily, with potential earnings exceeding 10,000 yuan per day [4].
- A black market for AI misinformation has emerged, driven by competitive business practices in which companies hire individuals to create negative content about rivals [5].

Group 3: Governance and Regulation

- The Chinese government has launched various actions to combat online misinformation, including a nationwide campaign targeting false information related to enterprises and public welfare [5][6].
- Experts suggest that a comprehensive governance framework, involving collaboration across multiple sectors, is necessary to tackle AI-generated misinformation effectively [6].
- Legal experts emphasize the need for a balanced regulatory approach that addresses misuse without stifling innovation in AI technology [6][7].