Fact-Checking
Tencent News' He Yijin: In the AI Era, Evolving from a "Content Platform" into a "Trusted Ecosystem"
Yang Guang Wang· 2025-12-18 09:17
Transparent user-experience design is equally critical. He Yijin envisions that future news products should offer a convenient "information tracing" feature: while reading, users could click any key fact or figure to view its original source, see clearly whether the information comes from an official notice, on-scene reporting, expert analysis, or a firsthand account, and consult a digital profile of the source, so that they can judge the information's accuracy for themselves and truly "know where they stand."

In an era when AI is sweeping through the news industry, we face a paradox: extreme information abundance alongside anxious, shallow cognition. On one hand, the ease of AI-generated content has let deepfakes and mass-produced rumors run rampant, from celebrity face-swaps to distorted professional reports, so the cost of obtaining true and accurate information has risen rather than fallen. On the other hand, traditional recommendation algorithms, amplified by AI, have worsened filter bubbles, shallow thinking, and polarization, narrowing users' horizons and squeezing the space for rational discussion.

Facing this challenge, Tencent News head He Yijin argued at the 2025 Tencent ConTech conference that AI is a double-edged sword, and what matters is how its users choose to wield it. He proposed that a quality news product must complete a double evolution: from "content platform" to "trusted ecosystem," and from "information pusher" to "cognitive collaborator." Its highest value is not to fill users' time but to light up their thinking.

Credibility as the Base: The Foundation of Quality News

In He Yijin's view, credibility is the scarcest resource of the AI era, and a news product must be built on a trustworthy content ecosystem. Building that ecosystem requires giving every content account its own dedicated information-quality digital ...
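The "information tracing" idea above, where every key fact links back to an original source classified as an official notice, on-scene report, expert analysis, or firsthand account, together with a "digital profile" of that source, can be sketched as a small data model. All names below (`SourceType`, `SourceProfile`, `TraceableClaim`) and the sample data are illustrative assumptions, not any actual Tencent News API:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class SourceType(Enum):
    # The four provenance categories named in the article
    OFFICIAL_NOTICE = "official notice"
    ON_SCENE_REPORT = "on-scene report"
    EXPERT_ANALYSIS = "expert analysis"
    FIRSTHAND_ACCOUNT = "firsthand account"

@dataclass
class SourceProfile:
    """The 'digital profile' of a source that a reader would see on click."""
    name: str
    source_type: SourceType
    url: Optional[str] = None  # link back to the original publication, if any

@dataclass
class TraceableClaim:
    """A key fact or figure in an article, linked to its provenance."""
    text: str
    sources: List[SourceProfile] = field(default_factory=list)

    def provenance(self) -> List[str]:
        # Render each source as "name (category)" for an on-click panel
        return [f"{s.name} ({s.source_type.value})" for s in self.sources]

# Hypothetical sample data, for illustration only
claim = TraceableClaim(
    text="About 1.85 million reports of illegal and harmful online "
         "information were filed in June 2025.",
    sources=[SourceProfile("regulator bulletin", SourceType.OFFICIAL_NOTICE)],
)
print(claim.provenance())
```

A real system would of course need verified source registries and editorial review behind each profile; the sketch only shows the shape of the claim-to-source linkage the article describes.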
Who Is "Poisoning" Your Mind
投资界· 2025-11-25 07:23
Core Viewpoint
- The article discusses the pervasive issue of false information on the internet, highlighting how it is generated and disseminated through various channels, including social media and AI technologies, creating a complex gray industry that profits from misinformation [4][5][20].

Group 1: Information Pollution
- The average Chinese individual spends nearly 8 hours online daily, encountering around 1,000 pieces of information, with a conservative estimate suggesting that hundreds of these are false [4].
- In June 2025, there were approximately 1.85 million reports of online illegal and harmful information across the country [4].
- False content acts like a mental fog, subtly contaminating public perception and trust [4].

Group 2: Mechanisms of Misinformation
- The article details how individuals and companies create false narratives, including scriptwriting and video production, with some earning between 70,000 and 900,000 yuan monthly [5][12].
- A specific case involves a character named "Taozi," who produces videos that appear authentic but are scripted and staged, often involving actors portraying delivery personnel and customers in fabricated scenarios [6][9].
- The content often exploits emotional narratives to engage viewers, leading to significant interaction and shares on social media platforms [7][8].

Group 3: Economic Incentives
- The production of false narratives is driven by financial incentives, with creators earning money through advertisements and viewer engagement [20][21].
- For instance, "Taozi" can earn around 70,000 yuan monthly from advertisements alone, in addition to revenue from viewer interactions [20].
- The article also mentions a company that utilizes AI to generate and distribute misleading content, highlighting the profitability of such operations [35][36].

Group 4: Social Impact
- The spread of false information not only misrepresents individuals but also fosters societal divisions and stigmatizes certain groups, such as delivery workers [22][24].
- The article cites specific incidents where misinformation led to public outrage and personal harm, illustrating the real-world consequences of online falsehoods [23][25].
- It emphasizes the challenge of fact-checking, as misinformation often spreads faster and more widely than corrections can be issued [43][44].

Group 5: AI's Role in Misinformation
- AI technologies are increasingly used to generate false information, with studies indicating that even a small percentage of false data in training sets can significantly increase harmful outputs [26][32].
- The article discusses how AI-generated content can manipulate public perception and even influence international relations, as seen in the context of the Ukraine conflict [33][34].
- Companies are leveraging AI to automate the creation of misleading narratives, further complicating the landscape of information integrity [35][36].
Who Is "Poisoning" Your Mind?
36氪· 2025-11-23 02:08
Core Viewpoint
- The article discusses the pervasive issue of misinformation on the internet, highlighting how fake content is produced and disseminated, often for profit, and the detrimental effects it has on public perception and trust in information sources [3][22][54].

Group 1: Information Pollution
- The average Chinese individual spends nearly 8 hours online daily, encountering approximately 1,000 pieces of information, with a conservative estimate suggesting that hundreds of these are false [3][4].
- Misinformation spreads rapidly through social media and short videos, often misleading millions within minutes [4][22].
- The article identifies a gray industry behind the creation of fake content, where individuals script, film, and distribute misleading videos, earning substantial incomes [5][22][30].

Group 2: Fake Content Production
- The production of fake videos often involves actors portraying scenarios that evoke sympathy or moral outrage, which are then monetized through views and advertisements [9][21][22].
- A specific example includes a character named "Taozi," who creates scripted videos featuring fake delivery scenarios that mislead viewers into believing they are real [7][9][18].
- The article reveals that these fake narratives are designed to provoke emotional responses, leading to increased engagement and revenue from advertisements [14][20][22].

Group 3: The Role of AI
- AI technology is increasingly used to generate and spread misinformation, with studies indicating that even a small percentage of false data in training sets can significantly increase harmful outputs [26][30].
- Companies are leveraging AI to automate the creation of misleading content, which can be distributed across various platforms for profit [30][32].
- The use of AI in misinformation not only affects individual reputations but can also disrupt public discourse and influence societal perceptions [29][30].

Group 4: Impact on Society
- The proliferation of fake information contributes to societal divisions and the stigmatization of certain groups, such as delivery workers, by fabricating narratives that create conflict [22][44].
- Misinformation can lead to real-world consequences, including public outrage and harm to individuals' reputations, as seen in various high-profile cases [25][44].
- The article emphasizes the challenge of fact-checking in the face of overwhelming misinformation, where false narratives often gain traction faster than corrections can be disseminated [46][54].
Musk's AI Encyclopedia Grokipedia Shows Verbatim Copying from Wikipedia
Sou Hu Cai Jing· 2025-10-28 03:25
Core Insights
- Grokipedia is designed in a very basic manner, resembling Wikipedia, with a large search box on the homepage and simple article formats that include titles, subtitles, and sources [2]
- The platform claims that its content has been "fact-checked" by Grok, which raises concerns due to the tendency of large language models to fabricate "facts" [4]
- Some articles on Grokipedia appear to plagiarize content from Wikipedia, with explicit statements indicating that certain content is adapted from Wikipedia under a Creative Commons license [5]
- Grokipedia's existence relies heavily on Wikipedia, as highlighted by a spokesperson from the Wikimedia Foundation, who emphasized the importance of Wikipedia's transparent policies and volunteer oversight [7][8]
- Grokipedia currently has over 885,000 articles, while Wikipedia maintains approximately 7 million English pages, indicating that Grokipedia is still at an early stage [8]

Content and Claims
- Grokipedia's articles often lack clear attribution for fact-checking, and when such checks were performed is not indicated [4]
- The platform's treatment of controversial topics, such as climate change, diverges from established scientific consensus, suggesting a potential bias in its content [7]
- Grokipedia is currently at version 0.1, indicating that it is still in the developmental phase and may undergo significant changes [8]
Meta Oversight Board Criticizes Platform Policy Changes, Says They May Have Negative Impacts
Huan Qiu Wang· 2025-04-23 08:08
Core Viewpoint
- The independent oversight board of Meta Platforms Inc. criticized the company's policy adjustments made in January, stating that the reduction in fact-checking and the easing of restrictions on controversial topics could lead to "potential negative impacts" [1][3].

Group 1: Policy Adjustments
- Meta's January reforms were announced abruptly, deviated from standard procedures, and lacked public disclosure regarding human rights due diligence [3].
- The reforms focused on reducing fact-checking efforts and loosening restrictions on discussions surrounding controversial topics such as immigration and gender identity [3][4].
- CEO Mark Zuckerberg has been working to mend relations with former President Trump, leading to the rollback of measures aimed at reducing hate speech, misinformation, and incitement to violence [3].

Group 2: Oversight Board's Concerns
- The oversight board expressed concerns about the social implications of Meta's policy adjustments and urged the company to evaluate the potential impacts [3].
- In the board's rulings on the first user content cases following the January reforms, some controversial content related to transgender bathroom use was allowed to remain, while content containing racist remarks was ordered to be removed [3].
- Meta's spokesperson welcomed the board's decisions that promote free speech but did not comment on the rulings requiring content removal [3].

Group 3: Lack of Supporting Data
- Zuckerberg claimed that previous governance measures led to "too many errors and over-censorship," but the company did not provide specific examples or error-rate data to support this assertion [4].
- The oversight board's criticism may prompt Meta to reassess the rationale behind its policy adjustments [4].