AI-Generated Synthetic Content

AI-related cases are growing rapidly and spreading into entertainment, finance, advertising and marketing, and other industries
21 Shi Ji Jing Ji Bao Dao · 2025-09-10 07:14
Core Viewpoint
- The rapid growth of artificial intelligence (AI) has led to an increase in related legal disputes, which are becoming more complex and diverse, necessitating a higher level of judicial expertise and adaptability in legal frameworks [1][2][3].

Group 1: Legal Challenges and Trends
- The uncertainty of the technology has escalated risks, and high technical barriers make fact-finding difficult, raising the demand for judicial professionalism [2].
- Existing legal documents lag behind the fast-paced development of the technology, presenting new challenges for judicial personnel [2][5].
- The attribution of responsibility is a focal concern for both the industry and the judiciary, as the complex AI industry chain involves various roles such as trainers, developers, service providers, and users [2][8].

Group 2: Characteristics of AI-Related Cases
- AI-related cases are expanding beyond the internet sector into traditional industries such as entertainment, finance, and advertising [4].
- The rapid innovation of AI products and services introduces new and complex legal risks, including issues like AI hallucinations and algorithmic opacity [4].
- Judicial rulings in AI cases not only address technical and legal questions but also play a significant role in guiding technology ethics, innovation incentives, and rights protection [4].

Group 3: Specific Legal Cases and Regulations
- The first nationwide "AI voice rights case" highlighted the complexity of apportioning responsibility among the multiple parties involved in AI data training and model development [8].
- The implementation of the "Artificial Intelligence Generated Synthetic Content Labeling Measures" on September 1 has made the labeling of AI-generated content mandatory, placing responsibilities on both users and platforms [5][7].
- Social platforms are adopting "AI detection" technologies to label suspected synthetic content, raising concerns about misclassification of genuine works and the impact on content distribution [6].

Group 4: Recommendations for Developers and Providers
- Developers are advised to enhance AI transparency and improve the accuracy and reliability of generated content by implementing effective measures throughout algorithm design and data training [9].
- Providers should fulfill their responsibilities as network information content producers, taking prompt action against illegal content and ensuring compliance with content-labeling obligations [9].
AIGC Labeling Measures take effect in September; platforms and large-model companies respond by "adding watermarks"
Bei Ke Cai Jing· 2025-09-03 06:15
On September 1, the Measures for the Labeling of AI-Generated Synthetic Content (the "Labeling Measures"), issued earlier by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and other departments, formally took effect, together with the accompanying mandatory national standard, Cybersecurity Technology - Labeling Method for Content Generated by Artificial Intelligence. The Labeling Measures, issued in March this year, require that all text, images, audio, video, virtual scenes, and other information generated or synthesized with AI technology carry the corresponding identification labels as required by law.

Beike Finance (Xin Jing Bao) reporters noted that, as of September 2, the content platforms Tencent, Douyin, Bilibili, and Kuaishou, as well as the large-model companies DeepSeek and SenseTime, had all issued announcements responding to the labeling requirement. Notably, Douyin E-commerce's Safety and Trust Center, representing an e-commerce platform, also published an announcement listing typical cases, including impersonating celebrities' voices to sell goods and generating false product claims inconsistent with reality, stressing that AI applications must be compliant and that any misuse or abuse will be severely punished.

Content platforms: actively implementing the requirements of the Labeling Measures
On the eve of the Labeling Measures taking effect, most mainstream content platforms had already issued announcements in advance.

Large-model companies: labels already added to AI-generated synthetic content within their platforms
On September 1, DeepSeek announced that, to implement the Labeling Measures and the related national standard and to prevent the risks of public confusion, misidentification, and misinformation that AI-generated content may cause, DeepSeek has already ...
Explaining the new labeling rules: how AI content gets a mandatory "household registration"
Jing Ji Guan Cha Bao · 2025-09-02 11:42
Starting September 1, a regulation and a standard closely tied to the AI industry formally took effect: the first is the Measures for the Labeling of AI-Generated Synthetic Content (the "Labeling Measures"), issued in March this year; the second is the mandatory national standard issued to accompany the Labeling Measures, Cybersecurity Technology - Labeling Method for Content Generated by Artificial Intelligence (GB 45438-2025).

When the internet is flooded with AI-generated false information, it is as if we live in a world surrounded by funhouse mirrors, gradually losing our ability to judge what is real, which is an extremely dangerous signal. The newly issued Labeling Measures and the accompanying national standard are meant to give all AI-generated content an ID-card-like label, so that people can tell at a glance which content was created by humans and which was generated by AI.

Author: Zhang Renjie  Cover image: Tuchong Creative

The implementation of the regulation and standard has far-reaching effects for practitioners and users alike; it is no exaggeration to say it will change the ecosystem of AI content across the entire internet. This regulatory combination aims to solve an increasingly serious problem: AI is "polluting" our information channels with vast amounts of content that is hard to tell apart from the real thing, with abuses such as the earlier "Three Sheep incident" and the use of AI-cloned voices to spread rumors, smear others, or even commit fraud proving hard to guard against.
Explaining the new labeling rules: how AI content gets a mandatory "household registration"
Jing Ji Guan Cha Wang· 2025-09-02 10:36
Core Viewpoint
- The implementation of regulations and standards for AI-generated content aims to address the growing problem of misinformation and content pollution on the internet, providing a framework for identifying AI-generated content and ensuring accountability among creators and users [1][5].

Group 1: Explicit Identification
- All AI model or application providers must add explicit identification when generating content, clearly indicating that it is AI-generated [2][3].
- For text content, explicit identification must be placed at the beginning, middle, or end, using terms like "AI" or "generated" [2].
- Image content requires identification in a corner, with text height not less than 5% of the shortest side of the image [2].
- Audio content must announce "generated by AI" at the beginning or end, or use a specific Morse-code-like rhythm [2].
- Video content follows rules similar to images, requiring identification in a corner and lasting at least 2 seconds [3].
- All interactive interfaces of AI models or applications must also display identification [3].
- Organizations and individuals are prohibited from maliciously deleting or altering these identifications [3].

Group 2: Implicit Identification
- Implicit identification serves regulatory purposes and must be included in the metadata of generated content, containing information such as whether the content is AI-generated, who generated it, and a unique content number [6][7].
- The regulations encourage the addition of digital watermarks to enhance traceability [6].
- Implicit identification is mandatory, while digital watermarks are encouraged but not compulsory [7].

Group 3: Compliance and Responsibilities
- Product entrepreneurs and AI tool providers are responsible for ensuring compliance with the new regulations, including the automatic addition of explicit and implicit identifications [8][9].
- User agreements must clearly inform users about the existence and legal requirements of these identifications [9].
- If services are provided through APIs, the service provider must ensure that implicit identification is included in the generated data [10].
- Users must declare and label any AI-generated content, regardless of the proportion of AI involvement [10][11].
- Removing implicit identification may lead to penalties, including reduced account credibility [11].

Group 4: Impact on Content Creation
- The regulations favor serious content creators while posing challenges for those spreading misinformation [11].
- The introduction of implicit identification, such as digital watermarks, serves as a deterrent against the spread of false information and enhances content accountability [11].
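The sizing rules summarized above (explicit label text at least 5% of the image's shortest side; for video, the label must also stay on screen for at least 2 seconds) can be sketched as a small helper. Only the 5% and 2-second figures come from the summary; the function name and everything else is illustrative, not part of the standard:

```python
import math

def min_explicit_label_height(width_px: int, height_px: int) -> int:
    """Minimum text height (in pixels) for an explicit AI-generated label
    on an image or video frame: not less than 5% of the shortest side."""
    return math.ceil(min(width_px, height_px) * 0.05)

# A 1920x1080 frame needs label text at least 54 px tall; for video,
# the label must additionally remain visible for at least 2 seconds.
print(min_explicit_label_height(1920, 1080))  # 54
```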
Attention, media professionals: this kind of content must carry a "watermark"
Xin Jing Bao· 2025-09-02 10:28
Group 1
- The core viewpoint of the article is that, starting September 1, AI-generated content must be clearly labeled to avoid legal risks, marking a significant regulatory shift in the management of AI content in China [1][2][4].
- The "Identification Measures for AI-Generated Synthetic Content" is not just a technical standard but a crucial part of the national strategy for AI content governance, responding to the rapid growth of AI technology and its associated risks [2][4].
- The user base for generative AI products in China has reached 230 million, with over 490 large models registered with the National Cyberspace Administration [2].

Group 2
- The Identification Measures impose three core requirements for AI content labeling: explicit labeling, implicit labeling, and platform responsibility, which obliges platforms to verify content labels before publication [5][6].
- Major platforms such as WeChat, Douyin, and Xiaohongshu are actively implementing labeling features, with WeChat providing guidelines for both platform labeling and user declarations [6][9].
- Content creators, including individual bloggers and media organizations, must reassess their content production processes to establish effective AI content labeling mechanisms [9][12].

Group 3
- The implementation of the Identification Measures signifies a transition from "develop first, regulate later" to "regulate while developing", marking a new phase in the regulation of generative AI in China [12].
DeepSeek announcement: strengthening AI content labeling to prevent misinformation
Xin Lang Ke Ji· 2025-09-01 09:45
Group 1
- DeepSeek announced the implementation of content identification for AI-generated synthetic content to comply with national standards effective September 1, 2025 [1].
- The platform has added labels to AI-generated content to prevent public confusion and misinformation, and users are prohibited from maliciously deleting or altering these labels [1].
- DeepSeek released a document detailing the principles and training methods of its AI models to ensure user awareness and control, aiming to mitigate risks associated with misuse [1].

Group 2
- The company plans to continue optimizing its labeling mechanism to enhance user experience and provide more reliable and secure AI services [1].
Today, the new AI content rules formally take effect; ignore them this time and you really can break the law.
Shu Zi Sheng Ming Ka Zi Ke · 2025-09-01 01:05
Core Viewpoint
- The implementation of the "Artificial Intelligence Generated Synthetic Content Identification Measures" and the accompanying national standard on September 1 is expected to significantly alter the ecosystem of AI-generated content on the internet, addressing the growing problem of indistinguishable fake content flooding information channels [3][5][10].

Group 1: Regulatory Framework
- The new regulations require all domestic AI model or application providers to label AI-generated content with both explicit and implicit identifiers [15][31].
- Explicit identifiers must clearly indicate that the content is AI-generated, with specific requirements for text, image, audio, and video formats [18][20][27][29].
- Implicit identifiers, meant for machine and regulatory recognition, must be embedded in the file metadata and include essential information such as whether the content is AI-generated, the producer's identity, and a unique identifier for the content [43][54].

Group 2: Responsibilities of AI Providers
- AI tool providers must upgrade their products to automatically include both explicit and implicit identifiers in any generated content [57].
- User agreements must be modified to inform users about the existence and legal requirements of these identifiers [59].
- Providers can offer exemptions for specific professional needs, but must ensure that users understand their labeling responsibilities and must retain logs of user identities for at least six months [60][61].

Group 3: Responsibilities of Content Creators
- Content creators using AI tools must actively use the provided labeling functions when publishing content that includes AI-generated elements [62][66].
- Even if only a small portion of the content is AI-generated, creators are required to declare and label it accordingly [67].
- Creators should avoid actions that could remove implicit identifiers, as this could lead to penalties from content platforms [69].

Group 4: Industry Impact
- The new regulations are seen as beneficial for serious content creators while posing challenges for those who misuse AI for misinformation or scams [70].
- The introduction of digital watermarks and implicit identifiers aims to enhance regulatory oversight and reduce the prevalence of low-quality AI-generated content on the internet [71].
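As a rough illustration of what the implicit identifier embedded in file metadata carries (whether the content is AI-generated, who produced it, and a unique content number, per the summary above), here is a minimal sketch. The field names and the JSON encoding are purely hypothetical assumptions for illustration, not the actual layout defined by GB 45438-2025:

```python
import json

def build_implicit_label(producer: str, content_id: str) -> str:
    """Assemble a hypothetical implicit-identification record, of the kind
    that would be embedded in a generated file's metadata."""
    record = {
        "aigc": True,              # flag: content is AI-generated/synthetic
        "producer": producer,      # which service or model produced it
        "content_id": content_id,  # unique number for tracing this item
    }
    return json.dumps(record, ensure_ascii=False)

label = build_implicit_label("example-model", "20250901-000001")
print(label)
```

In practice the record would be written into a format-specific metadata container (e.g. image EXIF/XMP fields or a video container atom) rather than printed, and a digital watermark could additionally be embedded in the pixels or samples themselves for traceability.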
More than a month into the campaign to rectify AI misuse, Douyin, Weibo, Xiaohongshu and others hand in their "report cards"
Nan Fang Du Shi Bao· 2025-06-12 09:41
Group 1
- The core viewpoint of the articles is the ongoing regulatory effort in China to address the misuse of AI technology, focusing on cleaning up illegal AI products and enhancing content identification [1][2][3].
- The "Clear and Rectify AI Technology Misuse" campaign is divided into two phases, with the first phase emphasizing source governance and the second targeting the dissemination of false information and inappropriate content [1][2].
- Various platforms, including Weibo and Douyin, have reported significant actions: Weibo cleaned 162 pieces of content related to AI face-swapping tutorials and closed 22 accounts, while Douyin removed 24,749 pieces of false and vulgar AI-generated content [1][2].

Group 2
- The Shanghai Municipal Cyberspace Administration has guided 15 key platforms, including Xiaohongshu and Bilibili, in cleaning up illegal AI products and related marketing information, resulting in the interception of over 820,000 pieces of illegal information and the disposal of over 1,400 accounts [2].
- In the area of training-data governance, Baidu has cleaned its data using authoritative sources, while Baichuan Intelligence has stopped using questionable data sources and established strict web-scraping rules [2].
- The management of AI content identification is a priority, with new regulations taking effect September 1 that require platforms to implement explicit and implicit identification for AI-generated content [3][5].

Group 3
- Nearly 60 companies in Beijing have already implemented the content-identification requirements, with platforms like Weibo and Douyin achieving dual labeling of generated content [5].
- Companies such as MiniMax and Xiaohongshu have completed the implementation of explicit identification standards and are working on implicit identification and verification [5].
- MiniMax has developed technical solutions to add metadata for implicit identification in downloadable files and plans to enhance content recognition through additional markers in AI-generated text and audio [5].