AI Misuse

Semi-Annual Summary Report on Internet Black and Gray Industry Trends, H1 2025 - Threat Hunter
Sohu Finance · 2025-09-01 10:49
Group 1
- The report by Threat Hunter analyzes trends in the internet black and gray industry in the first half of 2025, covering attack resources, technologies, and scenarios to inform enterprise risk control [1][10]
- Daily active risk IPs reached 13.82 million, a 15.02% increase over the previous period, with over 50% of attacks coming from "hijacked shared proxy" IPs [9][62]
- New "link code" money-laundering methods emerged, and laundering bank cards rose 28.6%, with gambling-related cards accounting for 70.25% of the total [1][9]

Group 2
- AI technology is being heavily misused, enabling minute-level face-swapping and 10-second voice cloning for fraud and authentication bypass [1][9]
- Marketing fraud intelligence grew 26% to 580 million entries, with high risks in the e-commerce and local life sectors [2][9]
- Financial fraud included 770,000 malicious loan-related entries, a 12% increase; car loan fraud fell 10% while housing loan fraud surged 63% [2][9]

Group 3
- The report highlights the adaptability of the black industry: once targeted, it quickly shifts to existing or emerging alternative channels [8][10]
- API attacks exceeded 1.49 million, with consumer finance the primary target, using methods such as account scanning and database collisions (credential stuffing) [2][9]
- Data breaches reached 57,000 incidents, concentrated in e-commerce and finance, with loan application information leaks increasing fourfold [2][9]
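The account-scanning and credential-stuffing attacks the report describes leave a recognizable signature: one source IP probing many distinct accounts in a short window. The sketch below is a hypothetical illustration of how a risk-control system might flag that pattern; the thresholds, class name, and IPs are invented for the example.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds only; real risk-control systems tune these
# per endpoint and combine many more signals.
WINDOW_SECONDS = 60
MAX_DISTINCT_ACCOUNTS = 5

class ScanDetector:
    """Flags IPs that attempt logins against many distinct accounts
    within a sliding time window, a pattern typical of account scanning
    and credential stuffing."""

    def __init__(self):
        # ip -> deque of (timestamp, account) attempts, oldest first
        self.attempts = defaultdict(deque)

    def record(self, ip, account, now=None):
        """Record one login attempt; return True if the IP looks like a scanner."""
        now = time.time() if now is None else now
        q = self.attempts[ip]
        q.append((now, account))
        # Evict attempts that fell out of the sliding window.
        while q and now - q[0][0] > WINDOW_SECONDS:
            q.popleft()
        distinct = {acct for _, acct in q}
        return len(distinct) > MAX_DISTINCT_ACCOUNTS

detector = ScanDetector()
flagged = False
for i in range(10):
    flagged = detector.record("203.0.113.7", f"user{i}", now=1000.0 + i)
print(flagged)  # True: 10 distinct accounts hit within one minute
```

A single user retrying their own password repeatedly touches only one distinct account and is never flagged, which is why the detector counts distinct accounts rather than raw attempts.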
Seeing and Hearing Are No Longer Believing! How Should the Misuse of AI Face-Swapping, Voice Cloning, and Similar Technologies Be Tackled?
CCTV News · 2025-08-25 01:24
Group 1
- The misuse of AI technologies such as voice cloning and deepfakes is increasingly prevalent, raising concerns about trust and the need for regulatory measures [1][2][3]
- AI-generated content has become sophisticated enough that distinguishing real from fake is difficult, affecting the livelihoods of voice actors and public figures [3][5]
- Rapidly developing AI tools have lowered the barriers to creating synthetic content, leading to widespread misuse and the proliferation of misleading information [5][6]

Group 2
- Regulatory bodies are struggling to keep pace with AI advances, prompting initiatives such as the "Clear and Clean" campaign against AI misuse [8][10]
- New regulations, such as the "Artificial Intelligence Generated Content Identification Measures," will require explicit labeling of AI-generated content to mitigate misuse [10][11]
- Implementing these regulations is seen as a crucial step toward a legal framework for managing the risks of AI technologies [12][13]
A New "Refund-Only" Scam: AI Fakery Becomes a Tool for Gaming Platform Policies
Xinhua · 2025-08-15 06:36
Core Viewpoint
- Consumers misusing AI tools to generate fake product-defect images for refunds pose significant challenges to e-commerce platforms and merchants, undermining fair trade practices and raising operational costs [4][5]

Group 1: Impact on E-commerce
- The phenomenon is not isolated: many merchants report similar experiences, including altered images that are hard to distinguish from real defects [3]
- The behavior exploits refund policies and could spread into a collective consumer trend, ultimately harming the integrity of the e-commerce ecosystem [4]
- Merchants face higher verification and operational costs and often opt for direct refunds or compensation, which complicates defending against fraudulent claims [5]

Group 2: Regulatory Response
- In response, the government introduced the "Identification Measures for AI-Generated Synthetic Content," effective September 1, 2025, which prohibits maliciously altering or concealing content identifiers [5]
- A nationwide three-month initiative, "Clear and Clean: Rectifying AI Technology Abuse," was launched in April to address these issues [5]
- E-commerce platforms are encouraged to strengthen technical capabilities, build a robust credit management system, and share fraud data across platforms to form a collaborative defense against emerging risks [5]
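The cross-platform fraud-data sharing the article recommends could, in one possible form, exchange salted hashes of claimant identifiers rather than raw personal data, so platforms can match repeat offenders without exposing identities. The sketch below is a hypothetical illustration; the salt, identifiers, and function names are invented, and a static salt over low-entropy identifiers is only weak obfuscation, not strong privacy.

```python
import hashlib

# Hypothetical scheme: participating platforms agree on a shared salt and
# exchange SHA-256 fingerprints of normalized claimant identifiers tied to
# confirmed AI-faked refund claims, instead of the raw identifiers.
SHARED_SALT = b"demo-salt"  # illustrative value; agreed out-of-band in practice

def fingerprint(identifier: str) -> str:
    """Normalize an identifier and return its salted SHA-256 hex digest."""
    normalized = identifier.strip().lower()
    return hashlib.sha256(SHARED_SALT + normalized.encode("utf-8")).hexdigest()

# One platform publishes fingerprints of confirmed fraudulent claimants;
# another imports them into a local blocklist.
shared_blocklist = {fingerprint("fraud_user@example.com")}

def is_flagged(identifier: str) -> bool:
    """Check a refund request's claimant against the shared blocklist."""
    return fingerprint(identifier) in shared_blocklist

print(is_flagged("  Fraud_User@example.com "))  # True: normalization matches
print(is_flagged("honest_user@example.com"))    # False
```

Normalizing before hashing matters: without it, trivial case or whitespace variations of the same account would produce different fingerprints and evade the blocklist.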
Behind Saining Xie's Mishap Lies a Darker Truth in Academia: Science Confirms Paper Fraud and AI Misuse
36Kr · 2025-08-05 09:50
Group 1
- Scientific fraud has evolved into an industry, with a complex network spanning "paper mills," publishers, journals, and intermediaries [3][6][10]
- A large-scale investigation has produced concrete evidence that fraudulent papers are being systematically infiltrated into global scientific journals [8][10]
- Fraudulent papers are growing significantly faster than academic publishing overall, indicating a rising trend in scientific misconduct [26][28]

Group 2
- An analysis of the journal PLOS ONE identified editors with abnormally high retraction rates, suggesting potential collusion between editors and authors [13][15]
- A network of 35 individuals was found responsible for over 4,000 papers across multiple publishers, indicating systemic collusion in the publication process [21]
- The issues are not isolated to a few journals but are likely prevalent across the academic publishing landscape [18][21]

Group 3
- The emergence of AI, particularly ChatGPT, has sharply increased AI-generated content in academic papers, with 22% of computer science papers showing signs of AI involvement [32][35]
- The frequency of AI usage in scientific writing has surged since ChatGPT's introduction, raising concerns about the integrity of academic work [30][44]
- AI-generated content risks misleading readers and degrading research quality, especially in sensitive fields such as medicine [26][28]
Shanghai Cyberspace Administration Files Penalties Against Generative AI Service Websites That Refused to Rectify
News Flash · 2025-06-24 09:42
Core Viewpoint
- The Shanghai Cyberspace Administration has initiated penalties against several generative AI service websites for failing to complete legally required safety assessments and for not implementing measures to prevent the generation of illegal content [1]

Group 1: Regulatory Actions
- The administration found that certain websites offering generative AI services had not conducted the safety assessments mandated by law [1]
- These websites failed to take necessary precautions against generating illegal content, including violations of personal information rights and production of illicit material such as money-laundering content and pornographic images [1]
- The companies must take down the related functions and may resume operations only after passing safety evaluations [1]

Group 2: Future Enforcement Focus
- The administration will continue to combat AI misuse, focusing on "AI disguise," "AI face-swapping and voice-changing," and "AI forgery" [1]
- Enforcement will concentrate on algorithmic services that infringe personal information rights, with strict penalties for repeat offenders and serious cases [1]
Kimi Launches Source-Quality Badges and Reaches a Formal Copyright Partnership with Caixin | Compliance Weekly (Issue 188)
21st Century Business Herald · 2025-04-28 03:14
Group 1: Kimi's Developments
- Kimi announced a copyright collaboration with Caixin Media, becoming the first domestic company to publicly declare such a partnership with a media outlet [1]
- Kimi launched a quality-source badge system to help users identify high-quality information sources, focusing on government and educational websites [1][2]
- The collaboration allows Kimi to generate answers based on Caixin's professional reporting when users ask about financial topics [1]

Group 2: Anthropic's Warning
- Anthropic reported that its AI model Claude is being misused for "influence-as-a-service," with organizations running over 100 bot accounts to spread politically biased content [3][4]
- The misuse extends beyond content generation to automated interactions such as commenting and sharing, indicating sophisticated manipulation of social media [4]

Group 3: Adobe's New Application
- Adobe launched an application called Content Authenticity that lets users protect images from AI training by embedding invisible metadata and a "prohibit AI training" label [5]
- The application is in public beta and supports batch processing of up to 50 images, but Adobe has not yet signed agreements with AI model developers to enforce the standard [5][6]

Group 4: Weibo's AI Privacy Response
- Weibo's "AI Smart Search" feature faced backlash for allegedly infringing user privacy, prompting the company to clarify that it does not analyze non-public user data [7]
- Following user feedback, Weibo adjusted the feature to avoid presenting content that may cause users discomfort [7]

Group 5: E-commerce Platforms' Policy Change
- Major e-commerce platforms, including Pinduoduo, Taobao, Douyin, Kuaishou, and JD.com, are set to eliminate the "refund only" policy, allowing merchants to handle refund requests independently [8]
- The platforms have discussed the cancellation of the "refund only" policy with regulatory authorities and will announce the details publicly once finalized [8]