AI Fakery
Huang Xiaoming Responds to "Losing More Than a Billion Yuan in Macau"
Zhong Guo Ji Jin Bao· 2026-02-21 10:28
One netizen put it bluntly: "This AI fakery is outrageous. Even bystanders are dumbfounded; this rumor-mongering has no limits." Earlier, reports had claimed that Huang Xiaoming lost more than a billion yuan gambling in Macau. Huang Xiaoming has now addressed the rumor, revealing it to be fake news stitched together with AI.

On a recently aired variety show, Huang Xiaoming responded head-on to the absurd story for the first time, explaining: "This was fake news made by AI." On the show, he said the report that he had lost more than a billion yuan in Macau was fabricated with AI technology. Even though the story sounds obviously false, many people believed it after the fact, and for those targeted the harm was considerable. Huang took the opportunity to remind viewers that today's AI can generate highly realistic fake news, some of it aimed at specific people, and urged everyone to keep their eyes open and not take such stories at face value. Most netizens criticized the rumor-mongers for exploiting gaps in public awareness and abusing AI to fabricate news at next to no cost.

[Lead] Fake! Huang Xiaoming debunks "lost more than a billion in Macau": it was AI-generated content. China Fund News reporter Zhang Zhou. On February 21, the topic "Huang Xiaoming responds to losing more than a billion in Macau" shot up Weibo's trending list and drew wide discussion. Several bloggers also noted that AI-fabricated content of this kind is hard to tell apart from the real thing, and warned of the hidden risks of AI abuse. The rumor reportedly originated in March 2025, when a story claiming that "a top-tier male star gambled for seven straight days in a VIP room at Wynn Palace in Macau and lost 1.03 billion yuan in liquid assets" spread across internet platforms; at the time ...
Sellers Use AI to Beautify Product Photos, Buyers Use AI to Claim Refunds: The AI Arms Race on E-commerce Platforms
36Ke· 2026-01-19 11:24
Core Insights
- The rise of AI technology is leading to a significant increase in fraudulent activities on e-commerce platforms, undermining trust between merchants and consumers [1][2]
- A mature fraud chain has emerged, where "wool party" users generate fake defect images using AI tools to request refunds without returning products, exploiting low-cost and low-skill barriers [2][3]
- Merchants are also using AI for deceptive practices, such as enhancing product images and using virtual models, which mislead consumers about the actual quality of products [5][6]

Group 1: Fraudulent Activities
- The "wool party" users create fake defect images using AI tools like Nano Banana and Midjourney, allowing them to claim refunds while keeping the products [2]
- The low entry barrier for AI-generated images contrasts with traditional photo editing, making it easier for fraudsters to operate [2][3]
- Fraudulent activities have evolved from individual cases to organized, professional operations, with clear divisions of labor among fraudsters [4]

Group 2: Merchant Responses
- Larger companies have legal teams and strategies to combat fraud, while smaller merchants often lack resources and choose to compromise due to high legal costs [3][4]
- Many small merchants report that the cost of legal action exceeds the losses incurred, leading to a lack of effective recourse [3][4]
- Merchants are increasingly facing challenges as fraud becomes more organized and sophisticated, making it difficult to protect their interests [3][4]

Group 3: Legal and Regulatory Framework
- Current legal frameworks provide avenues for victims to seek redress, but enforcement is often weak, and cases rarely lead to significant penalties for fraudsters [9][11]
- There is a call for improved legal standards and unified judicial interpretations to address AI-related fraud effectively [11]
- Recommendations include the establishment of timestamp services and AI image verification to aid in evidence collection and reduce the burden on victims [11]
AI Has Lowered the Barrier to Fakery for Ordinary People to Almost Nothing
36Ke· 2025-12-22 09:30
The following article is from Jingxiang Studio (镜相工作室), author: Jingxiang writers. Legal or illegal: the difference is a single prompt.

By Huang Yiting | Editor: Lu Zhen | Source: Jingxiang Studio (ID: shangyejingxiang) | Cover image: Unsplash

In many people's minds, fakery is the business of a select few. It takes specialized skills and hardware, and accumulated industry connections; the fakers lurk in the crevices of the gray and black markets, mysterious, menacing, without moral bottom lines, keeping a distance from the public that is at once remote and close. But starting this year, a group of people have felt a distinct change.

In November, plush-toy merchant Yu Jin encountered an AI-faked "refund without return" claim for the first time. A week after the toy arrived, the buyer sent a photo and applied for a refund-only settlement; after the shop's customer service rejected it as human-caused damage, the buyer asked the platform to step in and successfully got back 50 yuan.

But the photo had one implausible detail: the toy's soft skirt hem bore hard cracks, like those on a ceramic item. Yu Jin concluded it was an AI fake and vented about it on social media. Later, after media involvement, the platform refunded the 50 yuan to Yu Jin out of its own pocket, while the buyer who had filed the AI-faked claim disappeared without a trace.

Yu Jin realized that the spread of AI has made malicious refund-only claims by "wool party" freeloaders even cheaper. With one phone and one prompt, anyone can generate a convincingly fake image, walk away with a toy worth more than a hundred yuan, and resell it on a secondhand platform. "You put in nothing, and you get no ...
When AI Lowers the Barrier to Fakery to Almost Nothing, What Can Ordinary People Do?
Xin Lang Cai Jing· 2025-12-19 08:11
In November, plush-toy merchant Yu Jin encountered an AI-faked "refund without return" claim for the first time. A week after the toy arrived, the buyer sent a photo and applied for a refund-only settlement; after the shop's customer service rejected it as human-caused damage, the buyer asked the platform to step in and successfully got back 50 yuan.

But the photo had one implausible detail: the toy's soft skirt hem bore hard cracks, like those on a ceramic item. Yu Jin concluded it was an AI fake and vented about it on social media. Later, after media involvement, the platform refunded the 50 yuan to Yu Jin out of its own pocket, while the buyer who had filed the AI-faked claim disappeared without a trace.

By Jingxiang Studio's Huang Yiting | Editor: Lu Zhen

In many people's minds, fakery is the business of a select few. It takes specialized skills and hardware, and accumulated industry connections; the fakers lurk in the crevices of the gray and black markets, mysterious, menacing, without moral bottom lines, keeping a distance from the public that is at once remote and close. But starting this year, a group of people have felt a distinct change.

Yu Jin realized that the spread of AI has made malicious refund-only claims by "wool party" freeloaders even cheaper. With one phone and one prompt, anyone can generate a convincingly fake image, walk away with a toy worth more than a hundred yuan, and resell it on a secondhand platform. "You put in nothing, and you get no punishment."

Earlier this year, PR professional Qi Yun went through a similar shock. A listed company he serves was attacked by an AI-generated hit piece whose details he called "absurd beyond belief." But constrained by the content platform's rules, he and his colleagues had to spend two full days going through the piece's details one ...
Fake Images Used to Scam E-commerce Refunds, Large Models "Brainwashed" and Tamed: Southern Metropolis Daily Report Uncovers the AI Gray Industry
Nan Fang Du Shi Bao· 2025-12-18 10:35
Core Insights
- The rise of generative AI has led to an increase in AI-related fraud and misinformation, particularly in the e-commerce sector, highlighting the challenges of distinguishing truth from falsehood in a technologically advanced society [2][4]
- A report released at the eighth Woodpecker Data Governance Forum reviews 118 cases of generative AI risks, focusing on the societal trust challenges and ethical dilemmas posed by human-AI interactions [4][5]

Group 1: Impact on Society and Individuals
- Generative AI has significantly altered the landscape of information production and dissemination, leading to an exponential increase in fake content across personal, industry, and societal levels [5]
- AI-generated misinformation has resulted in various forms of fraud, including "AI yellow rumors" and scams targeting vulnerable populations, particularly the elderly [5][6]
- The report highlights a case where a PhD student at the University of Hong Kong cited 24 AI-generated fake references in a paper, leading to its retraction and an investigation [6]

Group 2: Legal and Ethical Concerns
- Instances of lawyers using AI to generate fictitious legal cases have emerged, raising concerns about the integrity of legal proceedings [6]
- The report discusses the emergence of a gray industry exploiting generative AI, manipulating data to influence AI model outputs, which can mislead users into believing the information is factual [7]
- The ethical implications of AI's "flattering" algorithms are examined, particularly in the context of human-AI relationships and the potential for emotional manipulation [8]

Group 3: Regulatory Responses and Recommendations
- The report emphasizes the need for global consensus and institutional rules to address the challenges posed by AI-generated misinformation, advocating for stronger platform regulation and cross-border collaboration [7]
- Recent lawsuits against AI platforms like Character.AI and OpenAI highlight the legal accountability issues surrounding AI interactions, particularly concerning youth safety [9][10]
- Various countries are implementing regulations to protect minors from AI-induced harm, with recommendations for AI products to prioritize user mental health and transparency in design [11]
AI-Faked "Dead Crabs" Scam a Merchant Out of a 195-Yuan Refund; "Customer" Given 8 Days of Administrative Detention, Details Revealed
Mei Ri Jing Ji Xin Wen· 2025-12-06 05:45
Group 1
- AI technology is being misused to create fake images and videos for fraudulent claims against online merchants [1][15]
- A case involving a crab dealer in Suzhou highlights the issue, where a buyer claimed that six out of eight crabs were dead, supported by suspicious video evidence [2][10]
- The dealer identified inconsistencies in the buyer's claims and reported the incident to the police, leading to the arrest of the fraudster [13]

Group 2
- The rise of AI-generated content poses challenges for online merchants, as it becomes easier for consumers to fabricate claims with low costs and high difficulty in detection [14]
- Legal experts indicate that using AI to create false claims for refunds constitutes fraud, which can lead to legal repercussions for the offenders [19]
- There is a call for online platforms to implement stricter verification processes to help merchants identify AI-generated content and protect their interests [14]
AI-Faked "Dead Crabs" Scam a Merchant Out of a 195-Yuan Refund; "Customer" Detained for 8 Days. Case Details: The Male-to-Female Count of the Dead Crabs Was Clearly Wrong, and One Crab Even Had Five Legs on One Side
Mei Ri Jing Ji Xin Wen· 2025-12-06 04:51
Core Viewpoint
- The rise of AI technology has led to its misuse in creating fake images and videos for fraudulent claims against online merchants, highlighting the need for better regulatory measures on e-commerce platforms [1][15]

Group 1: Incident Overview
- A crab merchant in Suzhou, Jiangsu, faced a fraudulent claim when a buyer reported that six out of eight crabs were dead shortly after delivery [2][10]
- The buyer provided a video that raised suspicions due to unusual characteristics, leading the merchant to request further evidence [6][8]

Group 2: Evidence and Investigation
- The merchant discovered inconsistencies in the buyer's claims, such as discrepancies in the number of dead crabs shown in the images and videos [10][11]
- After further investigation, it was revealed that the buyer had used AI to create a fake video to support their claim, resulting in a police investigation [14][20]

Group 3: Broader Implications
- The incident reflects a growing trend where AI is being used as a tool for fraud in e-commerce, with other merchants also reporting similar experiences of fake claims [16][18]
- Legal experts indicate that using AI-generated images to falsely claim product defects constitutes fraud, which could lead to legal repercussions for the offenders [20]
"AI Fakery + Item Swapping" Resurfaces in Online Return Scams. Reporter's Test: Defect Photos and Matching Videos Can Be AI-Generated with One Click, Nearly Impossible to Tell Real from Fake
Yang Zi Wan Bao Wang· 2025-12-04 15:14
Core Viewpoint
- The rise of AI-generated fake damage images and videos is leading to an increase in fraudulent refund requests in the e-commerce sector, causing significant operational challenges for merchants [1][12]

Group 1: Incident Overview
- A merchant reported a case where a customer requested a refund for a suitcase, claiming it was damaged, but the images provided were suspected to be AI-generated [2][3]
- The merchant discovered that the returned item was a low-quality substitute, indicating a potential swap or fraud [3][5]

Group 2: AI Technology and Fraud
- The accessibility of AI tools has made it easy for consumers to create realistic images and videos of product defects, which they use to falsely claim refunds [7][9]
- The fraudulent activities have spread across various product categories, including clothing, cosmetics, and fresh produce, with consumers manipulating images to appear damaged [7][12]

Group 3: Legal Implications
- Legal experts suggest that using AI to fabricate damage claims for refunds could constitute fraud, potentially leading to administrative penalties or criminal charges [12][14]
- Specific laws, such as the Administrative Penalty Law and Criminal Law, outline the consequences for such fraudulent activities, including fines and imprisonment for significant offenses [12][14]

Group 4: Merchant Response and Recommendations
- Merchants are advised to document evidence meticulously, including original product images, shipping records, and communication with customers, to support their claims against fraudulent refunds [13][14]
- E-commerce platforms are urged to enhance their refund verification processes to prevent automatic refunds based on potentially fraudulent claims [14]
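The evidence-keeping advice above (original product images, shipping records, customer communication) can be made tamper-evident with a simple hash manifest: fingerprint each file and record a UTC timestamp when it is filed, so a merchant can later show the material has not been altered. A minimal sketch using only the Python standard library; the folder layout, file names, and JSON manifest format are illustrative assumptions, not anything prescribed by the platforms or articles above.

```python
import hashlib
import json
import tempfile
import time
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(evidence_dir: Path) -> dict:
    """Hash every file under evidence_dir and stamp the manifest with UTC time."""
    return {
        "created_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "files": {p.name: sha256_of(p)
                  for p in sorted(evidence_dir.iterdir()) if p.is_file()},
    }

def verify_manifest(evidence_dir: Path, manifest: dict) -> bool:
    """Re-hash the files and confirm nothing changed since the manifest was made."""
    return all(sha256_of(evidence_dir / name) == digest
               for name, digest in manifest["files"].items())

if __name__ == "__main__":
    # Illustrative only: a scratch directory standing in for a merchant's evidence folder.
    with tempfile.TemporaryDirectory() as d:
        folder = Path(d)
        (folder / "product_photo.jpg").write_bytes(b"\xff\xd8 original packing photo")
        (folder / "chat_log.txt").write_text("buyer: item arrived damaged")
        manifest = build_manifest(folder)
        print(json.dumps(manifest, indent=2))
        print("intact:", verify_manifest(folder, manifest))
```

A manifest like this only proves integrity from the moment it was created; for third-party proof of *when*, it would be combined with an external timestamping service of the kind the reports above recommend.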
AI Sales Videos Produced in Bulk: "AI Li Gui" Impersonators Roam a Gray Area
Zhong Guo Qing Nian Bao· 2025-11-24 23:55
Core Viewpoint
- The rise of AI-generated marketing videos has led to concerns about authenticity and consumer trust, as many of these videos blur the line between reality and fabrication, posing risks to consumer rights and safety [1][3]

Group 1: AI Technology in Marketing
- The use of AI technology for mass-producing marketing videos is becoming increasingly common in e-commerce, with tutorials available online for creating eye-catching content [2][3]
- Current AI video generation models struggle with accurately depicting complex physical interactions, leading to issues such as "穿模" (model clipping), which highlights the limitations of AI in understanding real-world physics [2]

Group 2: Misuse of AI and Consumer Protection
- There have been instances of individuals and brands being impersonated in AI-generated content, misleading consumers and infringing on their rights [4][5]
- Regulatory bodies are taking action against companies that misuse AI for false advertising, as seen in a case where a company was penalized for promoting a product using a fabricated video of a well-known media personality [5][6]

Group 3: Regulatory Responses and Industry Standards
- Authorities are advocating for stronger regulations and collaborative efforts to address the challenges posed by AI in advertising, emphasizing the need for improved identification and management of AI-generated content [6][7]
- Platforms are evolving from manual reviews to AI-assisted identification of violations, enhancing their ability to detect and manage misleading content [7]

Group 4: Consumer Awareness and Reporting
- Consumers are encouraged to report suspected AI-related false advertising through official channels, highlighting the importance of vigilance in maintaining market integrity [8]
Generative AI Must Not Be Reduced to a Tool for Fraud
Jing Ji Ri Bao· 2025-11-20 22:16
Core Viewpoint
- The recent incident of an actor facing "AI impersonation" has sparked renewed public discussion about the implications of artificial intelligence, particularly in the context of content generation and potential misuse [1][2]

Group 1: AI Misuse and Public Concerns
- The rapid development of generative AI has made video production accessible without specialized skills, leading to misuse such as fake buyer reviews and fraudulent content targeting vulnerable populations [1]
- The incident serves as a warning about the dangers of AI being used as a tool for deception rather than creativity and efficiency [1]

Group 2: Regulatory Measures
- The "Artificial Intelligence Generated Synthetic Content Identification Measures," effective from September, mandate explicit and implicit labeling of AI-generated content to help users identify misleading information [1][2]
- Despite the implementation of these measures, some AI content remains unmarked, misleading audiences and necessitating a more robust governance framework [2]

Group 3: Recommendations for Governance
- A multi-layered governance system is essential to combat AI-related fraud, including clearer legal standards for penalties, defined responsibilities among service providers, platforms, and users, and enhanced regulatory efforts [2]
- Upgrading technical capabilities for high-precision detection of fraudulent content is crucial for effective identification and mitigation of AI-generated deception [2]
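The "implicit labeling" the Measures call for is, in general terms, a machine-readable mark carried in the file itself rather than shown on screen. Purely as an illustration of the idea (the Measures and national standards define their own metadata fields; the `AIGC` keyword and the use of a PNG tEXt chunk here are assumptions for the sketch), a label can be embedded in and read back from PNG metadata with nothing but the standard library:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: big-endian length, type, data, CRC-32 over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def embed_label(keyword: bytes, text: bytes) -> bytes:
    """Build a tiny 1x1 grayscale PNG carrying a tEXt metadata chunk as an implicit label."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)   # 1x1, 8-bit grayscale
    idat = zlib.compress(b"\x00\x00")                     # filter byte + one pixel
    return (PNG_SIG
            + png_chunk(b"IHDR", ihdr)
            + png_chunk(b"tEXt", keyword + b"\x00" + text)
            + png_chunk(b"IDAT", idat)
            + png_chunk(b"IEND", b""))

def read_labels(blob: bytes) -> dict:
    """Walk the chunk stream and collect every tEXt keyword/value pair."""
    labels, pos = {}, len(PNG_SIG)
    while pos < len(blob):
        (length,) = struct.unpack(">I", blob[pos:pos + 4])
        ctype = blob[pos + 4:pos + 8]
        data = blob[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = data.partition(b"\x00")
            labels[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length                                # length + type + data + CRC
    return labels

if __name__ == "__main__":
    png = embed_label(b"AIGC", b"label: AI-generated")
    print(read_labels(png))   # {'AIGC': 'label: AI-generated'}
```

The limitation noted in the article applies here too: metadata of this kind is trivial to strip or forge, which is why the governance recommendations above pair labeling with detection capabilities and enforcement rather than relying on labels alone.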