Fake Images Used to Defraud E-commerce Refunds, "Brainwashing" Large Models: Nandu Report Reveals the AI Gray Industry
Nanfang Dushi Bao (Southern Metropolis Daily) · 2025-12-18 10:35

Core Insights
- The rise of generative AI has led to an increase in AI-related fraud and misinformation, particularly in the e-commerce sector, highlighting the challenges of distinguishing truth from falsehood in a technologically advanced society [2][4]
- A report released at the eighth Woodpecker Data Governance Forum reviews 118 cases of generative AI risks, focusing on the societal trust challenges and ethical dilemmas posed by human-AI interactions [4][5]

Group 1: Impact on Society and Individuals
- Generative AI has significantly altered the landscape of information production and dissemination, leading to an exponential increase in fake content across personal, industry, and societal levels [5]
- AI-generated misinformation has resulted in various forms of fraud, including "AI yellow rumors" and scams targeting vulnerable populations, particularly the elderly [5][6]
- The report highlights a case where a PhD student at the University of Hong Kong cited 24 AI-generated fake references in a paper, leading to its retraction and an investigation [6]

Group 2: Legal and Ethical Concerns
- Instances of lawyers using AI to generate fictitious legal cases have emerged, raising concerns about the integrity of legal proceedings [6]
- The report discusses the emergence of a gray industry that exploits generative AI by manipulating data to influence AI model outputs, which can mislead users into believing the information is factual [7]
- The ethical implications of AI's "flattering" algorithms are examined, particularly in the context of human-AI relationships and the potential for emotional manipulation [8]

Group 3: Regulatory Responses and Recommendations
- The report emphasizes the need for global consensus and institutional rules to address the challenges posed by AI-generated misinformation, advocating stronger platform regulation and cross-border collaboration [7]
- Recent lawsuits against AI platforms such as Character.AI and OpenAI highlight the legal accountability issues surrounding AI interactions, particularly concerning youth safety [9][10]
- Various countries are implementing regulations to protect minors from AI-induced harm, with recommendations that AI products prioritize user mental health and transparency in design [11]