Spread of False Information
AI-Fabricated "Surveillance Footage" of a "Dog Saving a Child" Goes Viral: How Can Real Be Told from Fake in the Virtual World?
Yang Guang Wang· 2025-10-18 11:46
Core Viewpoint
- The article discusses the rise of AI-generated videos that mislead viewers into believing they are real surveillance footage, highlighting the need for clearer labeling and regulation of such content [1][4][5].

Group 1: AI-Generated Content
- A viral video titled "Dog Saves Child" was identified as AI-generated; it misled many viewers who believed it to be real surveillance footage and garnered 77,000 likes [1].
- Many similar AI-generated videos are labeled as "surveillance footage," but the disclaimers are often in small, inconspicuous text, leading to widespread misinformation [3][5].

Group 2: Regulatory Framework
- The "Artificial Intelligence Generated Synthetic Content Identification Measures," effective from September 1, 2025, mandates explicit labeling of AI-generated content, requiring visible prompts at the beginning of and around the video [3][4].
- Legal experts argue that current labeling practices do not meet the legal requirement of "significant perception," as labels are often placed in less noticeable areas [4][5].

Group 3: Digital Literacy and User Awareness
- Experts emphasize the importance of digital literacy among internet users, advocating for the development of skills to identify AI-generated content and verify information through multiple sources [6].
- The article suggests that users should be trained to recognize AI-generated content and cross-check information to discern its authenticity [6].
Study: Mainstream AI Chatbots' Rate of Spreading False Information Surges, Doubling from a Year Ago
Sou Hu Cai Jing· 2025-09-15 06:31
Core Insights
- The spread of false information by generative AI tools has increased significantly, with a 35% occurrence rate in August this year compared to 18% in the same month last year [1]
- The introduction of real-time web search capabilities in chatbots has driven refusal rates for user queries down from 31% in August 2024 to 0% a year later, which has contributed to the dissemination of misinformation [1][4]

Performance of AI Models
- Inflection's model has the highest misinformation spread rate at 56.67%, followed by Perplexity at 46.67%, while ChatGPT and Meta's models both stand at 40% [3]
- The best-performing models are Claude and Gemini, with misinformation rates of 10% and 16.67% respectively [4]
- Perplexity's performance has declined sharply, with its misinformation spread rate rising from 0% in August 2024 to nearly 50% a year later [4]

Challenges in Information Verification
- The integration of web search was intended to address outdated responses from AI but has instead created new problems, as chatbots now draw on unreliable sources [4]
- Newsguard identifies a fundamental flaw in AI's approach: early models avoided spreading misinformation by refusing to answer questions, while current models are exposed to a polluted information ecosystem [4]

AI's Limitations
- OpenAI acknowledges that language models inherently produce "hallucinated content," since they predict the next likely word rather than seek factual accuracy [5]
- The company is working on new technologies to indicate uncertainty in future models, but it remains unclear whether this will address the deeper problem of misinformation spread by AI [5]
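As a quick sanity check on the figures reported above, the claimed "doubling" can be verified directly: a rise from an 18% to a 35% occurrence rate is a growth factor of about 1.94x. The sketch below is illustrative only; the function name and variables are assumptions, and all rates are taken from the article's summary of the audit.

```python
# Minimal sketch checking the rate changes quoted in the article.
# Figures come from the article; names here are illustrative assumptions.

def growth_factor(old_rate: float, new_rate: float) -> float:
    """Multiplicative change between two occurrence rates."""
    return new_rate / old_rate

misinfo_2024 = 0.18   # misinformation occurrence rate, August 2024
misinfo_2025 = 0.35   # misinformation occurrence rate, August 2025

factor = growth_factor(misinfo_2024, misinfo_2025)
print(f"misinformation rate grew {factor:.2f}x")   # ~1.94x, i.e. roughly doubled

refusal_2024 = 0.31   # refusal-to-answer rate, August 2024
refusal_2025 = 0.00   # refusal-to-answer rate, a year later
print(f"refusal rate fell by {refusal_2024 - refusal_2025:.0%}")
```

The ~1.94x factor is what the headline rounds to "doubling"; the simultaneous drop of the refusal rate to zero is the trade-off the article attributes to always-on web search.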
A 39-Year-Old "Nanyang PhD" Delivering Food Goes Viral? Meituan Responds, Revealing Details Behind the Traffic Surge
Bei Jing Shang Bao· 2025-07-10 14:02
Core Viewpoint
- Meituan refutes claims regarding the educational background of its delivery riders, stating that such information lacks factual basis and is spread as false information to gain attention [3][14].

Group 1: Meituan's Response
- Meituan's official account clarified that claims about riders' educational qualifications, such as "30% of riders are undergraduates" or "70,000 master's degree holders," are unfounded and should be verified through official channels like the Ministry of Education or relevant educational institutions [3][14].
- The company emphasized that aggregate data on riders' educational backgrounds is not supported by facts and is merely speculative [3][14].

Group 2: Case of Ding XZ
- Ding XZ, a 39-year-old claiming to hold multiple prestigious degrees, has gained attention for the contrast with his identity as a delivery rider [3][5].
- Meituan's investigation into Ding XZ's delivery activity found that he registered as a rider on February 15 and has worked only a few days, averaging about 2 hours per day, with a total income of 174.3 yuan from 34 deliveries [3][4].
- Ding XZ's videos, which prominently feature his educational background, have seen a significant increase in viewership, particularly during a period of intense posting [5].