AI Content Identification
Shenzhen CA Partners with Insurance Institution to Build a Trusted AI Ecosystem
Zheng Quan Ri Bao Wang· 2025-09-12 07:13
(Staff report, reporter Li Bing) Recently, Shenzhen CA (Shenzhen Electronic Commerce Security Certificate Management Co., Ltd.), a subsidiary of OneConnect (金融壹账通), partnered with a large insurance institution, drawing on its strengths in digital identity authentication and SM (Chinese national commercial) cryptographic algorithms to jointly explore a trusted identification and traceability system for internet content. The aim is to give the insurer's content data "trusted certification" and "accountability" capabilities as it circulates and is shared among multiple parties, so that every piece of copy, image, video, or file facing the public or partners carries a "trusted digital identity," making content tamper-proof and responsibility attributable. This capability gives insurers stronger security assurance when using AI to empower their business.

According to reports, in this collaboration Shenzhen CA is combining SM algorithms with post-quantum algorithms to build a future-oriented hybrid certificate authentication system that protects content generated by large AI models across its full lifecycle. During generation, evidence preservation, and dissemination, key metadata (generator certificate ID, model version, timestamp, content hash, etc.) is signed and bound to the content, making it tamper-proof and traceable end to end.

To date, more than 490 large models nationwide have completed filing with the Cyberspace Administration of China, over 240 have passed provincial-level registration, and generative AI products have reached 230 million users. AI content identification has clearly become a compliance prerequisite for businesses, pushing the industry toward more standardized and orderly development.

Meanwhile, a multimodal intelligent review system integrates security large-model technology to precisely identify deepfake content and disinformation, building AI content security ...
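The signing scheme described above, binding metadata such as certificate ID, model version, timestamp, and content hash to the content itself, can be illustrated with a minimal sketch. The real system uses CA-issued hybrid SM/post-quantum certificates; here an HMAC with a demo key stands in for the certificate's signing key, and all field names are illustrative assumptions, not the actual format.

```python
import hashlib
import hmac
import json

# Stand-in secret key. The system described in the article uses
# CA-issued hybrid SM2/post-quantum certificates, not a shared HMAC key.
SIGNING_KEY = b"demo-signing-key"

def sign_ai_content(content: bytes, cert_id: str,
                    model_version: str, timestamp: int) -> dict:
    """Bind the key metadata to the content hash and sign the bundle."""
    record = {
        "cert_id": cert_id,
        "model_version": model_version,
        "timestamp": timestamp,
        "content_hash": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

def verify_ai_content(content: bytes, record: dict) -> bool:
    """Recompute hash and signature; any change to content or metadata fails."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed["content_hash"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

Because both the content hash and the metadata fields are covered by one signature, tampering with either the content or any single field (say, the model version) invalidates the record, which is what makes the "tamper-proof, traceable end to end" property checkable downstream.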
Multiple Platforms Launch AI Content Identification Features, Helping the Industry Shift Toward Regulated Innovation
Sou Hu Cai Jing· 2025-09-04 06:11
Core Viewpoint
- The implementation of the "Artificial Intelligence Generated Synthetic Content Identification Measures" aims to regulate the production and dissemination of AI-generated content, enhancing transparency and accountability across various platforms [1][6].

Group 1: Implementation by Platforms
- Major platforms such as Tencent, Douyin, DeepSeek, Weibo, Bilibili, and Kuaishou have announced the launch of AI-generated content identification features to comply with the new regulations [1][3].
- Douyin requires users to actively add explicit identification when publishing AI-generated content and will verify unmarked content for AI generation [3].
- DeepSeek has implemented identification for AI-generated content and prohibits users from maliciously altering or hiding these identifiers [3].
- Bilibili and Weibo have also introduced similar identification features, ensuring that unmarked AI-generated content will be flagged according to community rules [4].

Group 2: Challenges and Considerations
- Platforms face multiple challenges in implementing these measures, including the need for advanced computational power and algorithms to handle vast amounts of content identification [5].
- There is a necessity for user education to reduce resistance to the identification process and to ensure compliance with the new regulations [5].
- Experts suggest that a dual identification system (explicit and implicit) will significantly enhance content transparency and help users quickly identify AI-generated information [6][7].

Group 3: Impact on the Industry
- The new measures are expected to reshape the current AI content ecosystem by enforcing a clear identification system that can help mitigate the spread of false information and deepfakes [6][7].
- The identification requirements will likely disrupt low-quality content generation models while favoring high-quality, human-assisted content creation [7].
- In the long term, the identification system is anticipated to establish a trustworthy foundation for the healthy development of the AI industry, shifting it from "wild growth" to "regulated innovation" [7].
First-Week Test of Mandatory AI Content Labeling: Automatic Detection "Fails" on Douyin, Xiaohongshu, and Weibo; AI Apps Miss Text Labels; Video Watermarks Removable for Paying Members
Mei Ri Jing Ji Xin Wen· 2025-09-03 19:48
On September 1, the much-anticipated Measures for the Identification of AI-Generated and Synthetic Content (the "Identification Measures") officially took effect. Its core goal is singular: all AI-generated content must "declare its identity," in order to safeguard the authenticity and transparency of information dissemination. After the new rules took effect, reporters from Meiri Jingji Xinwen (National Business Daily, hereafter "NBD reporters") conducted in-depth tests of mainstream AI applications such as Doubao, DeepSeek, and Kling, as well as content dissemination platforms, and found that the Measures face challenges at the implementation level.

On the generation side, explicit labels are widely missing from AI-generated text, and some AI video generation tools even offer "watermark removal" as a paid membership service.

On the dissemination side, the three major content platforms Xiaohongshu, Douyin, and Weibo all failed in testing to automatically identify and label AI-generated content.

Ren Kui, dean of the College of Computer Science at Zhejiang University and one of the lead drafters of the Identification Measures, noted in an exclusive interview with NBD reporters that label embedding and detection technologies are currently highly fragmented, and that building a unified label compliance detection system and promoting generalized, mutually recognized labeling technologies remain serious challenges.

"Invisibility tricks" on the generation side: missing text labels and "members can remove watermarks"

The Identification Measures distinguish two kinds of labels for AI-generated content: explicit labels that users can directly perceive, such as text notices and corner marks; and implicit labels that users cannot easily notice, such as technical identifiers embedded in file data. So, after the new rules took effect, ...
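The explicit/implicit distinction the Measures draw can be illustrated with a minimal sketch: a visible text prefix as the explicit label, and a run of zero-width characters encoding a tag as a toy implicit label. This is purely illustrative; real implicit labels use standardized watermarking and file-metadata formats, and the `[AI-generated]` prefix and `AIGC` tag here are assumptions, not the mandated wording.

```python
# Zero-width space and zero-width non-joiner encode bits 0 and 1.
ZW0, ZW1 = "\u200b", "\u200c"

def add_labels(text: str, tag: str = "AIGC") -> str:
    """Attach an explicit visible label and a hidden implicit label."""
    explicit = f"[AI-generated] {text}"
    bits = "".join(f"{byte:08b}" for byte in tag.encode())
    implicit = "".join(ZW1 if bit == "1" else ZW0 for bit in bits)
    return explicit + implicit

def extract_implicit(text: str) -> str:
    """Recover the hidden tag from the zero-width characters, if present."""
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in text if ch in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode(errors="replace")
```

The sketch also shows why the dual system matters: a user can delete the visible prefix, but the hidden tag survives copy-paste of the text, so a platform-side detector still has something to check.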
DeepSeek and Other Large Models Collectively "Add Labels": The End of AI Fakery?
Hu Xiu APP· 2025-09-02 14:00
Core Viewpoint
- The article discusses the implementation of the "Artificial Intelligence Generated Content Identification Measures," which mandates that all AI-generated content must be clearly labeled to protect users, especially those with limited discernment abilities, from misinformation and deception [8][44][65].

Group 1: AI Content Identification
- Starting September 1, the "Artificial Intelligence Generated Content Identification Measures" requires all AI-generated content to be labeled, ensuring transparency [8].
- Major AI model companies like Tencent, ByteDance, and Alibaba have already begun updating their user agreements to comply with AI content labeling [6][7].
- The measures apply to various forms of content, including text, images, audio, and video, and require both service providers and users to adhere to labeling protocols [9][10].

Group 2: Impact on Users
- The article highlights the growing concern over the ability of users, particularly the elderly, to discern AI-generated content from real content, leading to potential emotional and financial exploitation [16][22].
- Examples are provided where individuals were misled by AI-generated videos, illustrating the risks associated with the lack of clear identification [18][20].
- The introduction of AI content labels is seen as a necessary step to protect vulnerable groups from being misled by AI-generated misinformation [22][43].

Group 3: Global Context and Challenges
- The article compares the new measures in China with similar regulations in countries like South Korea and Spain, noting that the U.S. lacks comprehensive federal regulations on AI content labeling [45][46].
- The challenges of enforcing AI content identification are acknowledged, with concerns that voluntary compliance by tech companies may not be sufficient to address the proliferation of misleading AI content [47][61].
- The article cites data indicating that human influencers earn significantly more than AI-generated content creators, highlighting the ongoing struggle for authenticity in the creator economy [63].
Why AI-Generated Content Must Carry Mandatory Identification
Zhong Guo Qing Nian Bao· 2025-04-27 02:23
Core Viewpoint
- The rapid development of artificial intelligence (AI) is reshaping information production and societal interactions, necessitating content identification to mitigate risks associated with unmarked AI-generated content [1][5].

Group 1: Obligations of AI Service Providers
- AI service providers have a duty to identify content to control and prevent specific dangers associated with AI-generated information [2].
- As the core entities in technology development, AI service providers influence the nature and societal impact of generated content through their algorithms and data training [2][3].
- The requirement for content identification transforms technical risks into traceable legal responsibilities, addressing questions of "what is generated," "who generated it," and "where it was generated" [3].

Group 2: Social Welfare Maximization
- The governance of AI should aim to maximize social welfare while balancing the interests of individual rights and public benefits [4].
- Implementing content identification obligations is intended to achieve the greatest happiness for the majority, ensuring minimal restrictions on individual pursuits of happiness [4][5].

Group 3: Virtue Ethics
- Content identification reflects the virtues of honesty and creditworthiness, guiding companies to cultivate responsible and prudent behaviors [6][8].
- The requirement for AI-generated content to be marked encourages companies to self-regulate according to virtue standards, maintaining the authenticity and reliability of information ecosystems [9].