Spread of Misinformation
After Maduro's Fall, AI-Generated Content Spreads as the Line Between Fact and Fiction Blurs
Sina Finance · 2026-01-06 12:32
Key Points: After the US military acted to topple Venezuelan leader Nicolás Maduro, a wave of AI-generated videos purporting to show Venezuelans celebrating in the streets went viral on social media.

One such video was subsequently flagged by X's "Community Notes" feature, the platform's crowdsourced fact-checking mechanism that lets users add context to posts they consider misleading. The note read: "This video is AI-generated and is currently being circulated as a real event with the intent to mislead the public."

The AI-synthesized clips, which depict jubilant crowds, have racked up millions of views across major social platforms including TikTok, Instagram, and X.

The video has been viewed more than 5.6 million times and reshared by at least 38,000 accounts, including business magnate Elon Musk, who ultimately deleted his repost.

One of the earliest and most widely shared clips on X was posted by an account called "Wall Street Apes," which has more than 1 million followers on the platform.

CNBC was unable to verify the video's origin, but fact-checkers at the BBC and Agence France-Presse (AFP) said the earliest known version appeared on the TikTok account "@美国好奇智库," which frequently posts AI-generated content.

The post showed several Venezuelans weeping with joy ...
Japanese Program Spreads False Information to Stoke Anxiety About China; Japanese Media Finds the Cited "Source" Does Not Exist
Huanqiu.com · 2025-12-24 04:05
[Huanqiu.com Report] According to a December 24 report in Japan's Yomiuri Shimbun, Wakayama Governor Miyazaki Izumi has called the claim aired on the Japanese online TV network ABEMA that "seven of the nine water sources in Wakayama Prefecture have been bought by Chinese-linked parties" "erroneous information, disinformation." ABEMA claimed the information came from "a Wakayama prefectural assembly member," but the Yomiuri interviewed all 42 assembly members serving when the program aired, and every one of them said they had conducted no such survey and had never been interviewed by ABEMA.

The report quoted an associate professor of social informatics at Ritsumeikan University (谷源司): "Topics involving foreigners easily attract attention online. When taking in information, pay attention to which part is fact and which part is the publisher's own assertion."

The Yomiuri noted that the term "water source" is vague, but whether it refers to water-supply infrastructure, springs, or river headwaters, the ABEMA claim does not hold up. An official in Wakayama's water-supply department said, "Rivers cannot be purchased, and no water-supply infrastructure has been bought."

The Yomiuri added that an ABEMA producer responded to its inquiry on November 27, admitting the information "contained exaggeration" while arguing that "you cannot say for certain that the source ...
Livestreamer Offers 200,000-Yuan Reward to Find Her "Lifesaver," Then Gets Detained?
Huanqiu.com · 2025-12-08 07:28
In the online world, some people put traffic above all else. To attract followers, some perform "deep affection" and others perform "righteousness," but behind it all are calculated camera work, emotional pacing, and traffic formulas, all in pursuit of profit. These staged fakes, wrapped in a veneer of positive energy, in fact overdraw public trust, exploit public goodwill, manipulate public sentiment, and disrupt social order.

In late August this year, videos appeared on multiple platforms from a streamer going by "Song Yufei," claiming to be "searching for a lifesaver." In the videos, the streamer said she had nearly drowned at the beach and was saved by a stranger who risked his life, and that she was offering a large cash reward to find this good Samaritan.

The short videos were quickly reshared by large numbers of netizens; the account's series of videos accumulated nearly 30 million plays, and the account gained tens of thousands of followers in a short time. Yet just as netizens were being moved by the story, it took a turn.

Source: Jinan Public Security

Fabricating a twisting plot: 11 staged "200,000-yuan reward" videos

Streamer Song Yufei: "In recording this video today, I want to beg everyone for a favor. If you were at the beach that day, or you know this young man, or you saw the whole thing happen, and you have any lead on him, please contact me right away. As long as the information is true, I am willing to pay 200,000 yuan as a reward."

The plot device of "a large reward" and "repaying a kindness" became the key to the account's traffic. Over the following month or so, the "Song Yufei" account continuously ...
Prime Minister Anutin Orders Crackdown on Cyber Fraud; Minister Chaichanok Pushes Three Key Measures
Ministry of Commerce Website · 2025-10-31 16:40
Core Viewpoint
- The Thai government, led by Prime Minister Anutin, is prioritizing the crackdown on cybercrime, particularly online fraud, misinformation, online gambling, and digital-asset money laundering, which are seen as significant threats to society and the economy [1]

Group 1: Key Measures
- The first measure involves proactive prevention, including collaboration with the National Broadcasting and Telecommunications Commission and operators to cut off illegal networks and mobile signals in border areas, along with strict legal action against involved parties, including those in the political and public sectors [1]
- The second measure focuses on data integration, aiming to enhance data sharing and analysis with police and the National Cyber Security Agency to establish a centralized database and real-time alert system, improving case tracking and victim-compensation efficiency [1]
- The third measure concerns improving legislation, including researching the establishment of a "Cyber Fraud Prevention Agency" to strengthen relevant laws and penalties [1]

Group 2: Operational Strategy
- The operational strategy of the Ministry of Digital Economy and Society emphasizes "proactive prevention, rapid response, and victim protection," with close cooperation with the Ministry of Interior and the Royal Thai Police to combat fraud gangs, scammers, and online gambling, ensuring public safety and stability in the digital society [1]
AI-Fabricated "Surveillance Video": "Dog Saves Child" Goes Viral. How Can Fake Be Told from Real in the Virtual World?
CNR.cn · 2025-10-18 11:46
Core Viewpoint
- The article discusses the rise of AI-generated videos that mislead viewers into believing they are real surveillance footage, highlighting the need for clearer labeling and regulation of such content [1][4][5]

Group 1: AI-Generated Content
- A viral video titled "Dog Saves Child" is identified as AI-generated, misleading many viewers who believed it to be real surveillance footage and garnering 77,000 likes [1]
- Many similar AI-generated videos are labeled as "surveillance footage," but the disclaimers are often in small, inconspicuous text, leading to widespread misinformation [3][5]

Group 2: Regulatory Framework
- The "Artificial Intelligence Generated Synthetic Content Identification Measures," effective from September 1, 2025, mandate explicit labeling of AI-generated content, requiring visible prompts at the beginning of and around the video [3][4]
- Legal experts argue that current labeling practices do not meet the legal requirement for "significant perception," as labels are often placed in less noticeable areas [4][5]

Group 3: Digital Literacy and User Awareness
- Experts emphasize the importance of digital literacy among internet users, advocating for skills to identify AI-generated content and verify information through multiple sources [6]
- The article suggests that users should be trained to recognize AI-generated content and cross-check information to discern its authenticity [6]
Study: Misinformation Rate of Mainstream AI Chatbots Surges, Doubling from a Year Ago
Sohu Finance · 2025-09-15 06:31
Core Insights
- The spread of false information by generative AI tools has increased significantly, with a 35% occurrence rate in August this year compared to 18% in the same month last year [1]
- The introduction of real-time web search in chatbots has driven refusal rates for user queries from 31% in August 2024 down to 0% a year later, which has contributed to the dissemination of misinformation [1][4]

Performance of AI Models
- Inflection's model has the highest misinformation spread rate at 56.67%, followed by Perplexity at 46.67%, while ChatGPT and Meta's models have a misinformation rate of 40% [3]
- The best-performing models are Claude and Gemini, with misinformation rates of 10% and 16.67% respectively [4]
- Perplexity's performance has notably declined, with its misinformation spread rate rising from 0% in August 2024 to nearly 50% a year later [4]

Challenges in Information Verification
- The integration of web search was intended to address outdated responses from AI but has instead created new issues, as chatbots now source information from unreliable outlets [4]
- NewsGuard has identified a fundamental flaw in AI's approach: early models avoided spreading misinformation by refusing to answer questions, but current models are now exposed to a polluted information ecosystem [4]

AI's Limitations
- OpenAI acknowledges that language models inherently produce "hallucinated content," as they predict the next likely word rather than seeking factual accuracy [5]
- The company is working on new techniques to indicate uncertainty in future models, but it remains unclear whether this will address the deeper problem of AI-spread misinformation [5]
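The headline's "doubled" framing can be sanity-checked with quick arithmetic on the two rates quoted in the study summary; a minimal sketch, using only the 18% and 35% figures reported above:

```python
# Misinformation occurrence rates reported in the study summary.
rate_aug_2024 = 0.18  # August last year
rate_aug_2025 = 0.35  # August this year

# Absolute change in percentage points, and the relative increase.
pp_change = round((rate_aug_2025 - rate_aug_2024) * 100)
ratio = rate_aug_2025 / rate_aug_2024

print(f"Absolute change: {pp_change} percentage points")
print(f"Relative increase: {ratio:.2f}x")
```

The relative increase works out to about 1.94x, consistent with the "roughly double" claim, while the absolute jump is 17 percentage points.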
39-Year-Old "Nanyang PhD" Goes Viral Delivering Food? Meituan Responds, Exposing the Details Behind the Traffic Surge
Beijing Business Today · 2025-07-10 14:02
Core Viewpoint
- Meituan refutes claims regarding the educational background of its delivery riders, stating that such information lacks factual basis and is spread as false information for attention [3][14]

Group 1: Meituan's Response
- Meituan's official account clarified that claims about riders' educational qualifications, such as "30% of riders are undergraduates" or "70,000 master's degree holders," are unfounded and should be verified through official channels such as the Ministry of Education or relevant educational institutions [3][14]
- The company emphasized that aggregate data on riders' educational backgrounds is not supported by facts and is merely speculative [3][14]

Group 2: Case of Ding XZ
- Ding XZ, a 39-year-old individual claiming to hold multiple prestigious degrees, has gained attention for his contrasting identity as a delivery rider [3][5]
- Meituan investigated Ding XZ's delivery activity, finding that he registered as a rider on February 15 and has worked only a few days, averaging about 2 hours per day, with a total income of 174.3 yuan from 34 deliveries [3][4]
- Ding XZ's videos, which prominently feature his educational background, have seen a significant increase in viewership, particularly during a period of intense posting [5]