Fact-Checking
Tencent News' He Yijin: In the AI Era, Evolving from a "Content Platform" into a "Trustworthy Ecosystem"
Yang Guang Wang· 2025-12-18 09:17
Core Insights
- The article emphasizes the dual nature of AI in the information industry, highlighting the challenges of misinformation and cognitive limitations while advocating for a shift toward trustworthy and collaborative information ecosystems [1][2].

Group 1: Trustworthiness as a Foundation
- Trustworthiness is identified as the most scarce resource of the AI era, necessitating a credible content ecosystem in which each content account carries a dedicated quality score that changes dynamically with content quality [2].
- A fact-checking mechanism is proposed that uses AI to cross-verify content and monitor the spread of misinformation, assisting human experts in identifying deepfakes and misleading information [2].

Group 2: User Experience and Transparency
- Future information products should incorporate "information traceability" features, allowing users to verify the original sources of key facts and data and thus better assess information accuracy [3].

Group 3: Collaborative Empowerment
- Information products must evolve from mere information providers into cognitive collaborators, organizing information into coherent narratives that supply context and connections and thereby lower users' cognitive barriers [4].
- The article suggests that information products should present multiple perspectives on contentious issues, encouraging users to engage with diverse viewpoints and develop empathy [4].

Group 4: Stimulating Curiosity and Engagement
- The article outlines practical tools to deepen engagement, such as visual aids for data journalism, highlighting of logical fallacies, and interactive features that let users pose questions and explore hypothetical scenarios [5][6].
- Tencent News has already made significant strides in implementing these concepts, having removed 95% of low-quality content over the past four years and providing extensive fact-checking services [6].

Group 5: Future Direction of the Industry
- The overarching goal is to build a credible ecosystem, become a cognitive collaborator, and encourage critical thinking, positioning Tencent News as a leader in navigating the challenges the AI wave poses to the information industry [6].
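The per-account quality score described in Group 1 is not specified further in the article; one minimal way such a dynamically changing score could work is an exponential moving average over fact-check outcomes. The sketch below is purely illustrative — the class name, 0–100 scale, decay factor, and outcome encoding are all assumptions, not Tencent News' actual scoring system.

```python
class AccountQualityScore:
    """Toy per-account quality score: an exponential moving average over
    fact-check outcomes (accurate vs. debunked). All parameters here are
    illustrative assumptions, not a real platform's scoring formula."""

    def __init__(self, initial: float = 50.0, alpha: float = 0.1):
        self.score = initial  # start neutral on a hypothetical 0-100 scale
        self.alpha = alpha    # weight given to the newest fact-check result

    def record_check(self, accurate: bool) -> float:
        target = 100.0 if accurate else 0.0
        # Move the score a fraction `alpha` of the way toward the outcome,
        # so it changes dynamically but no single item dominates.
        self.score += self.alpha * (target - self.score)
        return self.score


acct = AccountQualityScore()
for outcome in [True, True, False, True]:  # two accurate, one debunked, one accurate
    acct.record_check(outcome)
```

A design like this rewards sustained accuracy while letting a single debunked item visibly dent the score, matching the article's idea that the score should rise and fall with content quality.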
Who Is "Poisoning" Your Brain?
投资界· 2025-11-25 07:23
Core Viewpoint
- The article discusses the pervasive issue of false information on the internet, highlighting how it is generated and disseminated through various channels, including social media and AI technologies, creating a complex gray industry that profits from misinformation [4][5][20].

Group 1: Information Pollution
- The average Chinese individual spends nearly 8 hours online daily, encountering around 1,000 pieces of information, with a conservative estimate suggesting that hundreds of these are false [4].
- In June 2025, there were approximately 1.85 million reports of online illegal and harmful information across the country [4].
- False content acts like a mental fog, subtly contaminating public perception and trust [4].

Group 2: Mechanisms of Misinformation
- The article details how individuals and companies create false narratives, including scriptwriting and video production, with some earning between 70,000 and 900,000 yuan monthly [5][12].
- A specific case involves a character named "Taozi," who produces videos that appear authentic but are scripted and staged, often involving actors portraying delivery personnel and customers in fabricated scenarios [6][9].
- The content often exploits emotional narratives to engage viewers, leading to significant interaction and shares on social media platforms [7][8].

Group 3: Economic Incentives
- The production of false narratives is driven by financial incentives, with creators earning money through advertisements and viewer engagement [20][21].
- For instance, "Taozi" can earn around 70,000 yuan monthly from advertisements alone, in addition to revenue from viewer interactions [20].
- The article also mentions a company that uses AI to generate and distribute misleading content, highlighting the profitability of such operations [35][36].

Group 4: Social Impact
- The spread of false information not only misrepresents individuals but also fosters societal divisions and stigmatizes certain groups, such as delivery workers [22][24].
- The article cites specific incidents where misinformation led to public outrage and personal harm, illustrating the real-world consequences of online falsehoods [23][25].
- It emphasizes the challenge of fact-checking, as misinformation often spreads faster and more widely than corrections can be issued [43][44].

Group 5: AI's Role in Misinformation
- AI technologies are increasingly used to generate false information, with studies indicating that even a small percentage of false data in training sets can significantly increase harmful outputs [26][32].
- The article discusses how AI-generated content can manipulate public perception and even influence international relations, as seen in the context of the Ukraine conflict [33][34].
- Companies are leveraging AI to automate the creation of misleading narratives, further complicating the landscape of information integrity [35][36].
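Group 5's claim that even a small percentage of false training data matters can be made intuitive with a back-of-the-envelope calculation: if a fraction p of a corpus is poisoned, the probability that a batch of k independently sampled examples contains at least one poisoned item is 1 − (1 − p)^k, which grows quickly with k. The toy Python calculation below illustrates that arithmetic only — the rates and batch size are hypothetical and are not taken from the studies the article cites.

```python
def prob_batch_poisoned(p: float, k: int) -> float:
    """Probability that a batch of k independently sampled training
    examples contains at least one poisoned example, given poison rate p.
    Complement of the all-clean probability (1 - p)^k."""
    return 1.0 - (1.0 - p) ** k

# Even a 0.1% poison rate (p=0.001) contaminates most batches of 1,000.
for rate in (0.0001, 0.001, 0.01):
    share = prob_batch_poisoned(rate, 1000)
    print(f"p={rate:.4f}  batch=1000  P(>=1 poisoned)={share:.3f}")
```

This is only an exposure argument, not a model of downstream harm, but it shows why a seemingly negligible contamination rate still touches a large share of training batches.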
Who Is "Poisoning" Your Brain?
36氪· 2025-11-23 02:08
Core Viewpoint
- The article discusses the pervasive issue of misinformation on the internet, highlighting how fake content is produced and disseminated, often for profit, and the detrimental effects it has on public perception and trust in information sources [3][22][54].

Group 1: Information Pollution
- The average Chinese individual spends nearly 8 hours online daily, encountering approximately 1,000 pieces of information, with a conservative estimate suggesting that hundreds of these are false [3][4].
- Misinformation spreads rapidly through social media and short videos, often misleading millions within minutes [4][22].
- The article identifies a gray industry behind the creation of fake content, in which individuals script, film, and distribute misleading videos, earning substantial incomes [5][22][30].

Group 2: Fake Content Production
- The production of fake videos often involves actors portraying scenarios that evoke sympathy or moral outrage, which are then monetized through views and advertisements [9][21][22].
- A specific example is a character named "Taozi," who creates scripted videos featuring fake delivery scenarios that mislead viewers into believing they are real [7][9][18].
- The article reveals that these fake narratives are designed to provoke emotional responses, driving engagement and advertising revenue [14][20][22].

Group 3: The Role of AI
- AI technology is increasingly used to generate and spread misinformation, with studies indicating that even a small percentage of false data in training sets can significantly increase harmful outputs [26][30].
- Companies are leveraging AI to automate the creation of misleading content, which can be distributed across various platforms for profit [30][32].
- The use of AI in misinformation not only damages individual reputations but can also disrupt public discourse and shape societal perceptions [29][30].

Group 4: Impact on Society
- The proliferation of fake information contributes to societal divisions and the stigmatization of certain groups, such as delivery workers, by fabricating narratives that create conflict [22][44].
- Misinformation can lead to real-world consequences, including public outrage and harm to individuals' reputations, as seen in various high-profile cases [25][44].
- The article emphasizes the challenge of fact-checking in the face of overwhelming misinformation, where false narratives often gain traction faster than corrections can be disseminated [46][54].
Musk's AI Encyclopedia Grokipedia Found Copying Wikipedia Verbatim
Sou Hu Cai Jing· 2025-10-28 03:25
Core Insights
- Grokipedia is designed in a very basic manner, resembling Wikipedia, with a large search box on the homepage and simple article formats that include titles, subtitles, and sources [2].
- The platform claims its content has been "fact-checked" by Grok, which raises concerns given the tendency of large language models to fabricate "facts" [4].
- Some articles on Grokipedia appear to plagiarize content from Wikipedia, with explicit statements indicating that certain content is adapted from Wikipedia under a Creative Commons license [5].
- Grokipedia's existence relies heavily on Wikipedia, as a Wikimedia Foundation spokesperson noted, emphasizing the importance of Wikipedia's transparent policies and volunteer oversight [7][8].
- Grokipedia currently has over 885,000 articles, while Wikipedia maintains approximately 7 million English pages, indicating that Grokipedia is still at an early stage [8].

Content and Claims
- Grokipedia's articles often lack clear attribution for fact-checking, and the timing of such checks is not indicated [4].
- The platform's treatment of controversial topics, such as climate change, diverges from established scientific consensus, suggesting potential bias in its content [7].
- Grokipedia is currently at version 0.1, indicating that it is still in development and may undergo significant changes [8].
Meta Oversight Board Criticizes Platform Policy Changes, Warning of Potential Negative Impacts
Huan Qiu Wang· 2025-04-23 08:08
Core Viewpoint
- The independent oversight board of Meta Platforms Inc. criticized the company's January policy adjustments, stating that the reduction in fact-checking and the easing of restrictions on controversial topics could lead to "potential negative impacts" [1][3].

Group 1: Policy Adjustments
- Meta's January reforms were announced abruptly, deviated from standard procedures, and lacked public disclosure of human rights due diligence [3].
- The reforms focused on reducing fact-checking efforts and loosening restrictions on discussions of controversial topics such as immigration and gender identity [3][4].
- CEO Mark Zuckerberg has been working to mend relations with former President Trump, leading to the rollback of measures aimed at curbing hate speech, misinformation, and incitement to violence [3].

Group 2: Oversight Board's Concerns
- The oversight board expressed concern about the social implications of Meta's policy adjustments and urged the company to evaluate their potential impacts [3].
- In the board's rulings on the first user content cases following the January reforms, some controversial content related to transgender bathroom use was allowed to remain, while content containing racist remarks was ordered removed [3].
- Meta's spokesperson welcomed the board's decisions that promote free speech but did not comment on the rulings requiring content removal [3].

Group 3: Lack of Supporting Data
- Zuckerberg claimed that previous governance measures led to "too many errors and over-censorship," but the company provided no specific examples or error-rate data to support this assertion [4].
- The oversight board's criticism may prompt Meta to reassess the rationale behind its policy adjustments [4].