AI Rumors (AI谣言)
How to Curb AI "Lying"
Ren Min Ri Bao· 2025-08-21 08:13
The false information being fabricated is more deceptive than ever, and "AI (artificial intelligence) hallucination" produces "nonsense delivered with a straight face" ... Easy to spread and hard to contain, "AI rumors" pose new challenges for government regulators, internet platforms, technology developers, and society at large.

How can we curb AI "lying"? It requires both technical improvement and regulation, as well as systematic governance. Several experts interviewed said that legal norms and departmental supervision should be improved to form a pattern of multi-party, coordinated co-governance.

Drawing clear legal red lines to strengthen deterrence against "AI rumors"

Recently, a Guangxi netizen surnamed Zheng, who used AI to fabricate rumors of a gas price hike, received an administrative penalty; a batch of accounts spreading "AI rumors" were ordered muted or lawfully closed by cyberspace administration authorities; accounts teaching users how to use AI for "one-click undressing" and "face and body swapping," along with online shops openly selling such software, were jointly punished by cyberspace, public security, and other departments ...

Behind such law-based punishment is the continuous improvement of AI-related laws and regulations in recent years. The promulgation of the Provisions on the Administration of Deep Synthesis of Internet Information Services, the Interim Measures for the Administration of Generative Artificial Intelligence Services, and the Measures for Labeling AI-Generated and Synthetic Content has drawn clear legal red lines for AI users and platform operators and provided a legal basis for enforcement.

Yang Qingwang, vice dean of the Law School at Central South University, said that the problem of the low cost of breaking the law with "AI rumors" still exists; illegal acts should be severely punished and relevant laws and regulations improved to strengthen deterrence.

Xu Xiaoke, professor at the School of Journalism and Communication at Beijing Normal University, suggested that for the generation and spread of false information ...
Why Are "AI Rumors" Easy to Spread and Hard to Contain? (In-Depth Read)
Ren Min Ri Bao· 2025-08-17 22:01
Core Viewpoint
- The rapid development of AI technology has led to both conveniences and challenges, particularly in the form of AI-generated misinformation and rumors, prompting regulatory actions to address these issues [1].

Group 1: Emergence of AI Rumors
- AI-generated misinformation can stem from malicious intent or "AI hallucination," where AI models produce erroneous outputs due to insufficient training data [2][3].
- "AI hallucination" refers to the phenomenon where AI systems generate plausible-sounding but factually incorrect information, often due to a lack of understanding of factual content [3].

Group 2: Mechanisms of AI Rumor Generation
- Some individuals exploit AI tools to create and disseminate rumors for personal gain, such as increasing traffic to social media accounts [4].
- A case study highlighted a group that generated 268 articles related to a missing child, achieving over 1 million views on several posts [4].

Group 3: Spread and Impact of AI Rumors
- The low barrier to entry for creating AI rumors allows for rapid and widespread dissemination, which can lead to public panic and misinformation during critical events [5][6].
- AI rumors can be customized for different platforms and audiences, making them more effective and harder to counteract [6].

Group 4: Challenges in Containing AI Rumors
- AI-generated misinformation is more difficult to detect and suppress compared to traditional rumors, as it often closely resembles factual statements [8][9].
- Current technological measures to filter out misinformation are less effective against AI-generated content due to its ability to adapt and evade detection [9].
When Rumors Ride the "Tailwind" of AI
36Ke· 2025-06-12 09:09
Group 1
- The core viewpoint of the articles emphasizes the potential of AI identification systems in addressing the challenges of misinformation, while also acknowledging their technical limitations and the need for collaboration with existing content governance frameworks [1][2][3].

Group 2
- AI-generated harmful content has not fundamentally changed in nature but has been amplified by technology, leading to lower barriers for creation, increased volume of misinformation, and more convincing falsehoods [2][3].
- The rise of AI has enabled non-professionals to produce realistic fake content, as evidenced by reports of villagers generating articles using AI models for traffic revenue [2][5].
- The phenomenon of "industrialized rumor production" has emerged, where algorithms control AI to generate large volumes of misleading information [2].

Group 3
- The introduction of an AI identification system in China aims to address the challenges posed by low barriers, high volume, and realistic AI-generated content through a dual identification mechanism [3][4].
- The system includes explicit and implicit identification methods, requiring content generation platforms to embed metadata and provide visible labels for AI-generated content [3][4].

Group 4
- Theoretically, AI identification can enhance content governance efficiency by identifying AI-generated content earlier in the production process, thus improving risk management [4].
- Explicit identification labels can reduce the perceived credibility of AI-generated content, as studies show that audiences are less likely to trust or share content labeled as AI-generated [5][8].

Group 5
- Despite its potential, the effectiveness of AI identification systems faces significant uncertainties, including the ease of evasion, forgery, and misjudgment of AI-generated content [6][9].
- The costs associated with implementing reliable identification technologies can be high, potentially exceeding the costs of content generation itself [6][15].

Group 6
- The AI identification system should be integrated into existing content governance frameworks to maximize its effectiveness, focusing on preventing confusion and misinformation [6][7].
- The system's strengths lie in enhancing detection efficiency and user awareness, rather than making definitive judgments about content authenticity [7][8].

Group 7
- The identification mechanism should prioritize high-risk areas, such as rumors and false advertising, while allowing for more flexible governance in low-risk domains [8][9].
- Responsibilities between content generation and dissemination platforms need to be clearly defined, considering the technical challenges and costs involved in content identification [9][10].
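The dual identification mechanism described above (a visible label for readers plus machine-readable metadata embedded by the generation platform) can be illustrated with a minimal sketch. This is a hypothetical toy model, not the actual scheme specified by the Chinese labeling measures: the label wording, field names, and use of a content hash are all assumptions for illustration.

```python
import hashlib
import json

# Visible label shown to end users (explicit identifier). The exact wording
# mandated in practice differs; this string is illustrative only.
EXPLICIT_LABEL = "[AI-generated content]"

def label_ai_content(text: str, provider: str, model: str) -> dict:
    """Attach explicit and implicit identifiers to a piece of generated text."""
    # Explicit identifier: prepend a human-visible label.
    labeled_text = f"{EXPLICIT_LABEL} {text}"
    # Implicit identifier: structured metadata carried alongside the content.
    # A content hash lets downstream platforms detect tampering with the body.
    metadata = {
        "ai_generated": True,
        "provider": provider,          # hypothetical field name
        "model": model,                # hypothetical field name
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }
    return {"text": labeled_text, "metadata": metadata}

record = label_ai_content("Gas prices will rise next month.", "ExampleAI", "demo-1")
print(record["text"])
print(json.dumps(record["metadata"], indent=2))
```

In a real deployment the implicit identifier would be embedded in the file itself (for example, in image or audio container metadata, or as a watermark) rather than carried as a separate dictionary, which is part of why the articles note that reliable identification can cost more than generation.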
When Rumors Ride the "Tailwind" of AI
腾讯研究院· 2025-06-12 08:22
Group 1
- The article emphasizes the potential of the AI identification system in addressing the challenges of misinformation, highlighting its role as a crucial front-end support in content governance [1][4]
- It points out that over 20% of the 50 high-risk AI-related public opinion cases in 2024 were related to AI-generated rumors, indicating a significant issue in the current content landscape [1][3]
- The article discusses the three main challenges posed by AI-generated harmful content: lower barriers to entry, the ability for mass production of false information, and the increased realism of such content [3][4]

Group 2
- The introduction of a dual identification mechanism, consisting of explicit and implicit identifiers, aims to enhance the governance of AI-generated content by covering all stakeholders in the content creation and dissemination chain [5][6]
- The article notes that explicit identifiers can reduce the credibility of AI-generated content, as studies show that labeled content is perceived as less accurate by audiences [6][8]
- It highlights the limitations of the AI identification system, including the ease of evasion, forgery, and misjudgment, which can undermine its effectiveness [8][9]

Group 3
- The article suggests that the AI identification system should be integrated into the existing content governance framework to maximize its effectiveness, focusing on preventing confusion and misinformation [11][12]
- It emphasizes the need to target high-risk areas, such as rumors and false advertising, rather than attempting to cover all AI-generated content indiscriminately [13][14]
- The responsibilities of content generation and dissemination platforms should be clearly defined, considering the challenges they face in accurately identifying AI-generated content [14]
Hot Search Explodes! Eason Chan Has Just Posted a Response
21世纪经济报道· 2025-05-19 15:08
Core Viewpoint
- The article discusses the false rumors surrounding the death of the singer Eason Chan, highlighting the impact of AI-generated misinformation and the need for regulatory measures in the industry [8][10].

Group 1: Incident Overview
- On May 19, Eason Chan addressed rumors of his death by posting a humorous food picture on social media, stating "Revived and first eating meat" [1].
- The rumors originated from a video on a YouTube account called "台山TV," which used AI-generated content to spread false information about Chan's death due to COVID-19 complications [9][10].
- The misinformation quickly gained traction on social media, leading to widespread concern among fans [4][5].

Group 2: Response and Clarification
- Multiple individuals connected to Eason Chan confirmed the rumors were false, with his record label, Universal Music, labeling the claims as "very boring rumors" [7].
- Despite initial decisions not to respond to the rumors to avoid giving them more attention, the escalating nature of the situation prompted Chan to personally clarify the matter [6].

Group 3: AI and Misinformation
- The article emphasizes the growing issue of AI-generated rumors, noting that the technology, while beneficial, has also led to significant challenges, particularly for public figures [10].
- Calls for stronger legislative measures to regulate AI technology have been made by various public figures, indicating a recognition of the potential dangers posed by such advancements [10].
"Eason Chan Has Died" Is Fake News! The Source Video Was AI-Fabricated
21世纪经济报道· 2025-05-19 08:50
Core Viewpoint
- The article discusses the false rumors surrounding the death of singer Eason Chan, highlighting the spread of misinformation through social media and the impact of AI technology on the dissemination of fake news [1][2][10].

Group 1: Rumor Confirmation
- A singer from Eason Chan's team confirmed that the news of his death was false, stating that Eason Chan is in good health [2][4].
- The rumor originated from a YouTube account that posted a video with no credible information, merely using old photos and AI-generated voiceovers [5][6].

Group 2: Response from Authorities
- The local health authority in Kaohsiung did not release any information regarding Eason Chan's supposed death, and claims of fans mourning were fabricated [8].
- Eason Chan's record label described the rumors as "very boring" and reassured fans that there was no need for concern [11].

Group 3: AI Technology and Misinformation
- The article highlights the rapid development of AI technology, which has led to an increase in fake news and AI-generated content, causing distress for public figures [14].
- Other celebrities have also faced similar issues with AI-generated fake videos, prompting calls for stronger regulatory measures on AI technology [20][22].
"Top Celebrity Lost 1 Billion Gambling in Macau" Was AI-Fabricated! Rumormonger Placed Under Administrative Detention
21世纪经济报道· 2025-03-14 01:51
Core Viewpoint
- The article discusses the rapid spread of AI-generated rumors and misinformation on social media platforms, highlighting the need for regulatory measures and content identification to combat this issue.

Group 1: AI Rumors and Misinformation
- A rumor about a top celebrity losing 1 billion in gambling in Macau was fabricated by an individual using AI tools, leading to widespread public discussion and disruption of public order [2]
- Various social media platforms, including Weibo, have initiated measures to combat unmarked AI-generated content, focusing on areas such as social welfare and public emergencies [4][6]

Group 2: Regulatory Measures and Platform Responses
- Weibo announced a governance initiative to label AI-generated content, with penalties for accounts that repeatedly fail to disclose AI content [5][9]
- Other major platforms like Douyin, Kuaishou, and WeChat have also implemented similar requirements for users to declare whether their content is AI-generated [6]

Group 3: Challenges and Industry Impact
- The proliferation of low-quality AI content poses significant challenges to content platforms, affecting user experience and the visibility of original creators [7]
- Reports indicate that AI content farms are generating vast amounts of low-quality articles, with one case producing up to 19,000 articles daily across multiple accounts [8]

Group 4: Legislative and Future Directions
- The Chinese government is pushing for clearer identification of AI-generated content as part of its regulatory framework, with new guidelines expected to enhance the distinction between AI and real content [10]
- Industry leaders, including Xiaomi's CEO Lei Jun, have called for legislative measures to address the misuse of AI technologies, particularly in areas like AI face-swapping and voice imitation [12][13]
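The platform-side governance described above (users must declare AI-generated content, and accounts that repeatedly fail to disclose it are penalized) implies a simple moderation decision: compare any embedded AI marker against the uploader's own declaration. The sketch below is a hypothetical simplification of that logic; the field names, decision labels, and policy thresholds are all illustrative assumptions, not any platform's actual rules.

```python
# Toy model of a dissemination platform's check on one uploaded post.
# "metadata.ai_generated" stands in for an implicit marker embedded by a
# generation platform; "user_declared_ai" is the uploader's own declaration.

def review_post(post: dict) -> str:
    """Return a moderation decision for one uploaded post."""
    has_marker = post.get("metadata", {}).get("ai_generated", False)
    declared = post.get("user_declared_ai", False)
    if has_marker and not declared:
        # Embedded marker present but not disclosed by the user: add a
        # platform label and record a strike against the account.
        return "label_and_warn"
    if has_marker or declared:
        # Disclosed or marked: show a visible AI-generated label to readers.
        return "label"
    # No evidence of AI generation: publish normally.
    return "pass"

print(review_post({"metadata": {"ai_generated": True}, "user_declared_ai": False}))
```

Note that, as the earlier articles caution, real systems cannot rely on markers alone: markers can be stripped or forged, so this kind of check is a front-end filter that complements, rather than replaces, downstream review.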