AI Fakery
AI sales videos produced "in bulk": "AI impostors" operate in a gray zone
Zhong Guo Qing Nian Bao· 2025-11-24 23:55
In one video, two "children" in white short sleeves skateboard happily in the sunshine. Then the camera cuts, and as their movements cross, their bodies merge into each other, exposing traces of digital synthesis.

During this year's "Double 11" shopping festival, Beijing resident Wang Huan came across this children's-wear promotion on a social platform and was baffled; only when she spotted a small "content generated by AI" notice in the corner did she realize the footage of playing children was synthetic. "The picture is absurdly wrong. How is anyone supposed to trust what the product really looks like?" she asked.

Merchants' use of AI to sell goods has grown rapidly of late, and some of it crosses legal lines: some trade on celebrity appeal, impersonating experts and scholars, Olympic champions, and film and television stars to endorse products; others impersonate well-known brands to mislead consumers. Many consumers and fans cannot tell real from fake and place orders out of trust in an "idol" or "brand", unaware of the risks behind them.

These "AI impostors" operating in a gray zone potentially infringe the portrait rights of the people imitated as well as consumers' rights to know and to choose, posing new challenges for digital governance.

AI sales videos produced "in bulk"

"Two people passing through each other's bodies clearly violates the physics of the real world; in the industry we call this 'clipping' (chuanmo)," said Lan Tian, a senior AI engineer.

He noted that today's mainstream AI video-generation models are, at their core, trained on massive data and have strong language-understanding and image-generation capabilities, able to work from text ...
Generative AI must not become a tool for fakery
Jing Ji Ri Bao· 2025-11-20 22:16
Core Viewpoint - The recent incident involving an actor facing "AI impersonation" has sparked renewed public discussion about the implications of artificial intelligence, particularly in the context of content generation and potential misuse [1][2]. Group 1: AI Misuse and Public Concerns - The rapid development of generative AI has made video production accessible without specialized skills, leading to misuse such as fake buyer reviews and fraudulent content targeting vulnerable populations [1]. - The incident serves as a warning about the dangers of AI being used as a tool for deception rather than creativity and efficiency [1]. Group 2: Regulatory Measures - The "Artificial Intelligence Generated Synthetic Content Identification Measures," effective from September, mandates explicit and implicit labeling of AI-generated content to help users identify misleading information [1][2]. - Despite the implementation of these measures, some AI content remains unmarked, misleading audiences and necessitating a more robust governance framework [2]. Group 3: Recommendations for Governance - A multi-layered governance system is essential to combat AI-related fraud, including clearer legal standards for penalties, defined responsibilities among service providers, platforms, and users, and enhanced regulatory efforts [2]. - Upgrading technical capabilities for high-precision detection of fraudulent content is crucial for effective identification and mitigation of AI-generated deception [2].
[Xijie Observation] Beware AI-generated "refund-only" freeloaders
Bei Jing Shang Bao· 2025-11-18 15:02
Core Points - The rise of fraudulent refund claims in the e-commerce sector is causing significant distress for honest merchants, as some buyers exploit AI tools to create fake evidence of product damage [1][2] - The misuse of AI technology undermines the original intent of the "refund only" policy, which aims to enhance consumer experience while protecting merchants' rights [2] Group 1 - The fraudulent practices involve manipulating product images to appear damaged or spoiled, affecting a range of low-cost items, leading to frequent but small-scale financial losses for merchants [1] - Merchants face a dilemma where the cost of defending against these claims often exceeds the value of the products involved, highlighting the challenges in the current e-commerce landscape [1][2] - The introduction of new regulations in September aims to clarify the use of AI-generated content, prohibiting malicious alterations and ensuring that merchants can protect their rights [2] Group 2 - E-commerce platforms are urged to enhance AI image recognition technology and implement stricter review mechanisms to combat the rise of AI-assisted fraud [2] - As disputes over "refund only" policies continue to increase, many merchants are adjusting their after-sales strategies to better navigate the challenges posed by AI misuse [2]
Actress Wen Zhengrong's likeness hijacked for AI livestream selling; she was blocked after confronting the stream. Should the platform bear liability?
Xin Lang Cai Jing· 2025-11-10 07:22
Core Viewpoint - The incident involving actress Wen Zhengrong highlights the urgent need for legal and regulatory intervention regarding the unauthorized use of AI-generated images for commercial purposes, raising questions about platform accountability [2][4][6] Group 1: Celebrity Rights Protection - Celebrities must follow a structured approach to protect their rights, starting with evidence collection, such as saving screenshots of AI broadcasts and links to infringing products [2][3] - Legal actions can be taken against merchants for infringing on portrait rights and name rights, with the Civil Code providing a solid legal basis for such claims [3][4] Group 2: Platform Responsibilities - Platforms cannot evade responsibility and must implement preemptive measures, such as using technology to identify AI-generated content and verifying identities in live broadcasts [4][6] - Upon receiving reports of infringement, platforms are required to act within 24 hours to remove infringing content, as stipulated by the E-commerce Law [4][6] Group 3: Legal Framework and Enforcement - The Civil Code and E-commerce Law provide a clear legal framework for rights holders to notify platforms and enforce their rights against unauthorized use of AI [4][5] - Regulatory bodies need to increase penalties for violations, as demonstrated by past cases where companies were fined for impersonating public figures [5][6] Group 4: Challenges and Solutions - The covert nature of AI fraud complicates enforcement, but proactive monitoring and technological upgrades are essential for platforms to prevent misuse [5][6] - Collective action among celebrities, platforms, and regulatory authorities is necessary to effectively combat the misuse of AI technology [6]
Wen Zhengrong blocked by an "AI Wen Zhengrong": AI development must not fuel passing the fake off as real
Yang Zi Wan Bao Wang· 2025-11-06 06:30
The quagmire created by AI-hijacked livestreams has long exceeded the scope of individual rights. For public figures like Wen Zhengrong, AI-synthesized fake livestreams infringe portrait and voice rights and mislead consumers about her commercial partnerships, while the batch generation of infringing content and the frequent switching of accounts make the cost of gathering evidence for rights protection prohibitive. For ordinary consumers, goods bought on the strength of celebrity trust may be counterfeit, yet when infringement occurs they have nowhere to turn because the responsible party is hard to trace. Worse, this chaos is draining trust from the entire digital ecosystem: once "AI fakery" becomes the norm, legitimate livestreams and lawful endorsements also come under suspicion, creating a vicious cycle of bad money driving out good.

Technological progress should not come at the cost of authenticity. The Measures for Labeling AI-Generated and Synthetic Content, effective this September, explicitly require prominent labels on AI-generated content, providing a policy basis for governance. In practice, however, some merchants hide the "AI-generated" label in a corner or obscure it by technical means, and lagging platform review gives infringing content an opening; a gap remains between regulation and technical implementation. Breaking the deadlock requires policy, technology, and platforms to act in concert: regulators should refine labeling standards and specify penalties; on the technical side, traceability tools such as blockchain digital IDs and tamper-proof implicit watermarks can be promoted; platforms need to upgrade multimodal review systems and deal strictly with content lacking compliant labels.

When technological progress makes "passing the fake off as real" effortless, rules and accountability must draw the boundaries. It is not enough to make AI-generated content "show its identity"; infringing conduct must also be made to pay ...
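To make the "implicit watermark" idea above concrete, here is a deliberately minimal sketch of least-significant-bit (LSB) watermarking, one of the simplest forms of invisible marking. All names and the list-of-pixels "image" are illustrative assumptions; production systems described in the article would use robust, tamper-evident schemes (and standardized metadata), not plain LSB, which does not survive re-encoding.

```python
# Toy LSB watermark: hide a provenance tag in the lowest bit of each
# grayscale pixel (0-255). Illustrative only; not a real AIGC-labeling scheme.

def embed_watermark(pixels, tag):
    """Return a copy of pixels with tag's UTF-8 bits written into the LSBs."""
    bits = [int(b) for byte in tag.encode("utf-8") for b in format(byte, "08b")]
    if len(bits) > len(pixels):
        raise ValueError("image too small for this tag")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels, n_chars):
    """Read n_chars UTF-8 bytes back out of the pixel LSBs."""
    bits = [p & 1 for p in pixels[: n_chars * 8]]
    data = bytes(
        int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)
    )
    return data.decode("utf-8")

pixels = [127] * 256                      # stand-in for a tiny grayscale image
marked = embed_watermark(pixels, "AIGC")  # each pixel changes by at most 1
recovered = extract_watermark(marked, 4)
```

The point of the sketch is the property the article relies on: the mark is invisible to a casual viewer (each pixel shifts by at most one gray level) yet machine-recoverable, which is what lets a platform's review system flag unlabeled AI content.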
Interview with Yawei Technology's Yang Qiaoya: when AI starts "spreading falsehoods" and the technology is "poisoned", who keeps watch?
Sou Hu Cai Jing· 2025-11-02 13:19
Core Viewpoint - The discussion centers around the issue of AI, particularly large language models like Baidu's, generating false information and the ethical implications of this phenomenon [2][3]. Group 1: AI's "Fabrication" Issue - The term "fabrication" in AI is referred to as "hallucination," where AI generates plausible but incorrect information due to flawed training data or insufficient information [3]. - The frequent occurrence of factual errors in AI products from platforms with millions of users leads to a public trust crisis, potentially distorting public perception and disrupting market order [3][4]. Group 2: Risks of Data Poisoning - The risk of malicious actors feeding AI with false information to harm competitors is identified as a form of "data poisoning," representing an asymmetric gray war [4][5]. - Attackers can disseminate carefully crafted false information across various online platforms, which AI then learns from, ultimately presenting these as objective answers to unsuspecting users [4][5]. Group 3: Solutions and Responsibilities - A comprehensive "digital immune system" is necessary, requiring collaboration among companies, users, regulators, and society [6]. - Companies like Baidu must prioritize "truthfulness" alongside "fluency" in their AI strategies, implementing mechanisms for source verification and fact-checking [6]. - Establishing stricter data cleaning processes and developing algorithms to detect and eliminate malicious information is essential [6]. Group 4: User Empowerment - Users should transition from passive information receivers to critical consumers, employing cross-verification as a fundamental practice [7]. - Utilizing existing fact-checking platforms and reporting false information generated by AI can contribute to improving the AI model [8]. 
Group 5: Regulatory Actions - Regulatory frameworks must keep pace with technological advancements, establishing legal boundaries for AI-generated content and imposing severe penalties for malicious activities [9][10]. - Collaboration among regulatory bodies and AI companies is crucial for effective governance and combating data poisoning [11]. Group 6: Overall Perspective - The situation is viewed as a "growing pain," highlighting the dual-edged nature of technology and the need for corporate responsibility and societal engagement [12].
Rein in AI fakery, preserve social trust
Ke Ji Ri Bao· 2025-10-17 01:09
Core Points - A notable case of using artificial intelligence (AI) for false advertising has been reported in Beijing, where a company falsely claimed its product could treat various diseases during a live broadcast, while it was merely a regular food product [1] - The incident involved the AI-generated likeness of a well-known CCTV host, highlighting the growing misuse of AI technology to create realistic fake videos [1] - The emergence of AI deepfake technology poses significant challenges to content safety and erodes the foundation of social trust, as it allows for the creation of deceptive representations of public figures [1] Industry Response - In September, China implemented the "Artificial Intelligence Generated Synthetic Content Identification Measures," requiring all AI-generated content to include explicit identification and encouraging the use of digital watermarks for implicit identification [1] - Regulatory bodies are urged to enhance oversight and enforcement against platforms and individuals violating these regulations, as demonstrated by the recent actions taken by Beijing's market supervision department [1] - Content dissemination platforms and AI service providers are expected to fulfill their responsibilities by improving AI recognition technology and enhancing the ability to trace and verify content authenticity [2] Public Awareness - The public is encouraged to remain vigilant and improve their ability to discern the authenticity of information to avoid being misled by false information [2] - The rapid development of AI technology in China necessitates the continuous improvement of safety standards and legal guidelines for various application scenarios [2] - A collaborative effort is required from all stakeholders to restore the integrity of the online space and safeguard the foundation of social trust [2]
Cyberspace and public security authorities target AI fakery, incitement of negative emotions, and other abuses
Zhong Guo Xin Wen Wang· 2025-10-10 05:58
Core Points - The article discusses the crackdown on online rumors and misinformation related to public policies, disasters, and social issues, highlighting the misuse of AI tools to create false narratives and the impact on public order and individual rights [1][2][3] Group 1: Online Misinformation - In September, rumors related to disasters and floods were prevalent, with exaggerated claims about typhoons and fabricated videos circulating on social media [2] - Specific instances include false reports about a typhoon in Guangdong and misleading videos about severe weather in Zhengzhou, which were generated using AI technology [2] Group 2: Fraudulent Activities - Criminals have exploited the situation by creating fake announcements about government subsidies and investment opportunities, leading to scams that compromise personal information and financial security [1] - Examples include a fraudulent app posing as an investment platform and misleading claims about national projects offering rewards [1] Group 3: Government Response - The Central Cyberspace Administration has initiated a special campaign to address issues related to inciting negative emotions and spreading panic, targeting platforms that fail to manage content responsibly [3] - Law enforcement has taken action against individuals spreading false narratives, including those fabricating stories for sensationalism [3]
Forging official projects, exaggerating disaster reports, staging sob-story scripts: cyberspace and public security authorities crack down on AI fakery, incitement of negative emotions, and other abuses
Yang Shi Wang· 2025-10-10 05:28
CCTV.com reports: according to the "Wangxin China" (Cyberspace China) public account, online rumors in September concentrated on public policy, disaster and flood conditions, and social livelihood, with forged policy documents, fabricated sob stories, and AI tools abused to concoct false disaster reports, harming people's rights and disrupting public order. Cyberspace, public security, and other departments are cracking down hard on rumor-making and rumor-spreading and continuing to clean up the online environment.

Sob-story rumors also surface from time to time. To chase eyeballs and traffic, one self-media outlet staged and filmed a video of "a Daliangshan girl returning to her family 24 years after being abducted". Another fabricated a storyline of "a Chinese woman married into a foreign slum pleading to return home", using melodrama to attract netizens and harvest attention. Such conduct toys with netizens' feelings and stokes negative emotions.

The Central Cyberspace Administration recently launched the "Qinglang: Rectifying Malicious Incitement of Negative Emotions" special campaign, focusing on content that incites extreme group antagonism, spreads panic and anxiety, stirs up online violence, or excessively plays up pessimism. Cyberspace authorities have also pursued cases against Weibo, Kuaishou, Toutiao, UC, and other platforms for failing to fulfill their content-management responsibilities. Public security organs are punishing rumor-mongering in accordance with the law; those who fabricated fake official projects or spread the "Daliangshan girl" and "Chinese woman in a foreign slum" rumors have been penalized.

Criminals also fabricated a "2025 national salary subsidy application and certification notice", using "state subsidies" as a front to lure people into clicking links and harvest their real-name information ...
Using AI to fake storefront photos: a "fake facade" brings no real traffic
Xin Jing Bao· 2025-09-15 09:44
Core Points - The rise of AI-generated images is misleading consumers in the food delivery industry, creating a false sense of popularity for certain restaurants [1][2] - Many food delivery platforms have not effectively addressed the issue of AI-generated storefronts, leading to consumer deception and potential food safety concerns [3][4] Group 1 - AI technology is being used by some merchants to create fake storefronts and attract customers, despite the actual conditions being vastly different [1] - The use of AI-generated images is cost-effective and easy to implement, making it an attractive option for businesses looking to increase sales [1] - Consumers are misled by these AI-generated images, which compromises their rights and increases their consumption costs [2] Group 2 - Some food delivery platforms have acknowledged the issue but have not taken sufficient action to prevent the use of AI-generated images [3] - There is a need for food delivery platforms to enhance their governance and create a trustworthy consumer environment [3] - Both e-commerce and food delivery platforms should develop technological tools to combat AI-generated deception, requiring accountability from platforms and stronger regulatory oversight [3][4]