Deepfakes
People's Daily Online Releases "Top Ten Advances in China's Content Technology (2021-2025)"
Ren Min Wang· 2025-12-25 13:16
Core Viewpoint
- The report "Top Ten Advances in China's Content Technology (2021-2025)" highlights significant breakthroughs in the content technology sector during the 14th Five-Year Plan period, showcasing a clear development trajectory from "tool empowerment" to "ecological reconstruction" and emphasizing the intelligent, vertical, and systematic development trends in the industry [1]

Group 1: Major Advances
- The rise of domestic general models and their integration with media applications, with models like "Wenxin Yiyan" and "Tongyi Qianwen" being launched, enhancing media integration and content production capabilities [2]
- AIGC (AI-Generated Content) is driving a paradigm shift in content production, with tools like Jianying AI and Tencent Zhiying improving automation and personalization in content creation [3]
- The establishment of national key laboratories and research support since 2019 has strengthened innovation and collaborative development in content technology [4]

Group 2: Infrastructure and Data Support
- The construction of national-level corpora, such as the "Mainstream Value Corpus," is improving the quality of training data for large models and supporting their sustainable development [5]
- Content review processes are evolving from "human-centric" to "human-machine collaboration," using AI for efficient content verification and safety detection [6]

Group 3: Cultural Heritage and Cross-Industry Applications
- Intelligent restoration and digital activation technologies are aiding the preservation and innovation of cultural heritage, with advances in image restoration and 3D reconstruction [7]
- The integration of XR, metaverse, and digital twin technologies is expanding cross-industry content applications, creating immersive experiences across sectors [8]

Group 4: Digital Assets and Security Challenges
- Mainstream media are exploring digital collectibles, integrating them with XR and metaverse scenarios to innovate user interaction and cultural dissemination [9]
- The industry is actively addressing new security challenges posed by deepfakes, with initiatives like "AI anti-counterfeiting" technology being developed to enhance content safety [10]

Group 5: Regulation and Future Outlook
- A series of regulations established since 2021 aims to ensure the balanced development and application of generative AI technologies, promoting a collaborative governance framework [11]
- The report concludes that these advances mark a new phase in content technology development, emphasizing the importance of media integration, digital culture prosperity, and governance efficiency in shaping the future landscape [11]
In Japan, Nearly Half of Deepfake Pornography Involving Minors Is Made by Classmates
Xin Hua She· 2025-12-18 04:41
Deepfakes are videos, images, or audio clips produced with artificial intelligence (AI) that look highly realistic; the technology can swap the face of a person in pornographic footage with someone else's. Most such crimes target women and minors. Japanese police said on the 17th that about half of the deepfake pornography cases involving minors under 18 handled in the first nine months of this year were the work of the victims' classmates. Kyodo News reported on the 18th, citing National Police Agency data, that of the 79 such cases handled by Japanese police in the first nine months of this year, more than 80% of the victims were junior or senior high school students, and in about half of the cases the perpetrator attended the same school as the victim. The photos used to generate the deepfake pornography were typically taken from graduation yearbooks. This is the first time the National Police Agency has released detailed data on such cases. Among the victims, junior high school students accounted for about 52%, senior high school students about 32%, and elementary school students about 5%. In about 53% of the cases the deepfake images were created or distributed by the victims' classmates; in about 6% the perpetrator was someone the victim had met through social media; and in another 6% the perpetrators were staff at the victim's school or students from other schools. In Japan, creating and distributing pornographic images of a person without their consent constitutes an infringement of rights, and those involved may also face lawsuits. Of the 79 cases, Japanese police opened criminal investigations into 4 on suspicion of defamation and other offenses, and issued corrective guidance to the perpetrators in another 6. The National Police Agency said about 18% of the cases involved images produced with generative AI. Although ...
AI and Humans | "AI Slop" Is Flooding the Internet, and the Last Line of Defense Is Humanity Itself
Ke Ji Ri Bao· 2025-12-16 05:26
Core Viewpoint
- The rise of "AI Slop" content, characterized by low-quality, repetitive, and meaningless material generated by AI tools, is increasingly prevalent on the internet, particularly on social media platforms [1][2][4]

Group 1: Definition and Characteristics of "AI Slop"
- "AI Slop" refers to low-quality content produced by AI tools, including text, images, and videos, often found on social media and content farms [2][3]
- The term "slop" originally described cheap and low-nutrition items, and its modern usage highlights the poor quality of AI-generated content [2]
- Unlike "deepfakes" or "AI hallucinations," which involve specific deceptive intent or technical errors, "AI Slop" is produced without regard for accuracy or logic, leading to a flood of meaningless content [3]

Group 2: Causes of Proliferation
- The proliferation of "AI Slop" is driven by the increasing power and low cost of AI technology, enabling rapid content generation that prioritizes clicks and ad revenue over quality [4]
- New AI tools like ChatGPT, Gemini, and Sora allow quick production of readable text, images, and videos, leading to the rise of content farms that prioritize quantity over quality [4]
- Algorithms on social media platforms often favor engagement metrics over content quality, further encouraging the spread of "AI Slop" [4]

Group 3: Consequences of "AI Slop"
- The overwhelming presence of "AI Slop" can obscure credible sources in search results, blurring the line between truth and fiction [5][6]
- As misinformation spreads more rapidly in an environment where distinguishing fact from fiction becomes difficult, the trust crisis in information sources intensifies [6]

Group 4: Potential Solutions
- Some companies, like Spotify, are beginning to label AI-generated content and adjust algorithms to reduce the visibility of low-quality material [7]
- The C2PA (Coalition for Content Provenance and Authenticity) standard aims to embed metadata in digital files to trace their origins, helping to differentiate between human-created and AI-generated content [7]
- The most effective defense against "AI Slop" lies in individual responsibility, encouraging users to verify sources and support genuine creators [7][8]
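The provenance idea behind C2PA can be illustrated with a minimal sketch: bind descriptive metadata to a cryptographic hash of the content, so any later modification is detectable. This is a toy model only, assuming invented function names and manifest fields; the real C2PA standard embeds cryptographically signed manifests directly inside the media file rather than storing a separate record.

```python
import hashlib

def make_manifest(content: bytes, creator: str, tool: str) -> dict:
    """Build a minimal provenance record that binds metadata to the
    content's SHA-256 hash (illustrative; field names are hypothetical)."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "generator_tool": tool,  # e.g. a camera app vs. an AI model
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content still matches the hash recorded at creation."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]

article = b"Original human-written paragraph."
manifest = make_manifest(article, creator="newsroom@example.org", tool="manual")

print(verify_manifest(article, manifest))                    # True: unmodified content
print(verify_manifest(b"AI-rewritten paragraph.", manifest)) # False: content was altered
```

A hash alone only proves integrity, not origin; that is why the actual standard adds digital signatures over the manifest, so readers can also check who vouched for the record.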
"AI Slop" Is Flooding the Internet, and the Last Line of Defense Is Humanity Itself
Ke Ji Ri Bao· 2025-12-16 02:20
Core Viewpoint
- The rise of "AI Slop" content, characterized by low-quality, repetitive, and meaningless information generated by AI tools, is increasingly prevalent on the internet, particularly on social media platforms [1][2][4]

Group 1: Definition and Characteristics of AI Slop
- "AI Slop" refers to low-quality content produced by AI tools, including text, images, and videos, often found on social media and automated content farms [2][3]
- The term "slop" originally described cheap and low-nutrition items, and its modern usage highlights the poor quality of AI-generated content that clutters online spaces [2][3]
- AI Slop differs from "deepfakes" and "AI hallucinations" in that it is not necessarily intended to deceive but results from careless content production without verification [3]

Group 2: Causes of AI Slop Proliferation
- The proliferation of AI Slop is driven by the increasing power and low cost of AI technology, enabling rapid content generation that prioritizes clicks and ad revenue over quality [4][5]
- Tools like ChatGPT, Gemini, and Sora allow quick production of readable content, leading to the rise of content farms that prioritize quantity over quality [4]
- Algorithms on social media platforms often favor engagement metrics over content quality, further exacerbating the issue [4][5]

Group 3: Consequences of AI Slop
- The overwhelming presence of AI Slop can reduce the visibility of credible sources, blurring the lines between truth and fiction [5][6]
- This trust crisis has tangible effects, as misinformation spreads more rapidly when users cannot distinguish credible information from AI-generated content [5][6]

Group 4: Potential Solutions and Industry Responses
- Some companies, like Spotify, are beginning to label AI-generated content and adjust algorithms to reduce the recommendation of low-quality material [6]
- The C2PA (Coalition for Content Provenance and Authenticity) standard aims to embed metadata in digital files to trace their origins, helping to distinguish between human-created and AI-generated content [6]
- The most effective defense against AI Slop lies in individual user behavior, encouraging users to verify sources and support genuine creators [6][7]
"AI Slop" Is Flooding the Internet, and the Last Line of Defense Is Humanity Itself
Ke Ji Ri Bao· 2025-12-16 00:23
Core Viewpoint
- The rise of "AI Slop" content, characterized by low-quality, repetitive, and meaningless information generated by AI tools, is increasingly prevalent on the internet, particularly on social media platforms [1][2][4]

Group 1: Definition and Characteristics of "AI Slop"
- "AI Slop" refers to low-quality content produced by AI tools, including text, images, and videos, often found on social media and automated content farms [2][3]
- The term "slop" originally described cheap and low-nutrition items, and its modern usage highlights the poor quality of AI-generated content that clutters information channels [2][3]
- Unlike "deepfakes" or "AI hallucinations," which involve specific deceptive intent or technical errors, "AI Slop" is produced without regard for accuracy or logic, leading to a proliferation of meaningless content [3]

Group 2: Causes of Proliferation
- The widespread creation of "AI Slop" is driven by the increasing power and low cost of AI technology, allowing users to generate content quickly for clicks and ad revenue [4]
- Tools like ChatGPT, Gemini, and Sora enable rapid content generation, leading to the emergence of content farms that prioritize quantity over quality [4]
- Algorithms on social media platforms often favor engagement metrics over content quality, further incentivizing the production of "AI Slop" [4]

Group 3: Consequences of "AI Slop"
- The overwhelming presence of "AI Slop" can obscure the line between credible and false information, leading to a trust crisis in which misinformation spreads rapidly [5][6]
- As "AI Slop" proliferates, it diminishes the visibility of trustworthy sources in search results, complicating users' ability to discern fact from fiction [5][6]

Group 4: Potential Solutions
- Some companies, like Spotify, are beginning to label AI-generated content and adjust algorithms to reduce the recommendation of low-quality material [8]
- The C2PA (Coalition for Content Provenance and Authenticity) standard aims to embed metadata in digital files to trace their origins, helping users identify whether content is human-created or AI-generated [8]
- The most effective defense against "AI Slop" lies in individual user behavior, encouraging people to verify sources and support genuine creators [8]
Shanxi Consumer Association Issues Warning: Beware of New "AI Face-Swap" and "AI Voice Dubbing" Scams
Zhong Guo Xin Wen Wang· 2025-12-12 03:08
Core Viewpoint
- The Shanxi Consumer Association has issued a warning about new scams that use "AI face-swapping" and "AI voice synthesis" technologies, collectively referred to as "deep forgery," which pose significant threats to consumers' financial security and personal privacy as AI technology becomes increasingly prevalent in 2025 [1]

Group 1: Scam Methods
- Scammers collect clear facial videos and voice clips of consumers or their acquaintances from social media platforms like Douyin, WeChat, and Weibo to train AI models [1]
- They create realistic fake videos or audio using deep-forgery technology and stage urgent scenarios, such as claiming an accident or detention, to lower the victim's guard [1]
- Scammers then contact victims through video calls or send forged audio and video clips, requesting money transfers to specified accounts [1]

Group 2: Consumer Protection Measures
- The Shanxi Consumer Association emphasizes the importance of establishing "multi-channel verification" habits and not relying solely on what is seen or heard [2]
- Consumers are advised to adhere to the "three no's and two musts" prevention principles: do not trust, do not transfer money, and do not disclose personal information [2]
- Any money-transfer request made through non-face-to-face channels must be verified, even if it appears to come from a familiar voice or face, and consumers should be wary of urgent situations designed to prevent verification [2]

Group 3: Verification and Reporting
- Consumers should verify requests for money from "friends or family" by hanging up and calling back using stored contact information, or by confirming through mutual acquaintances [3]
- Watching for subtle signs of AI-generated content, such as unnatural facial expressions or audio discrepancies, can help identify scams [3]
- In cases of suspected fraud, consumers should report to the police immediately and provide evidence such as account details, contact information, chat records, and transfer receipts [3]
Deepfakes Are Reshaping the Boundaries of Business Security: An Enterprise Survival Guide Is Ready
36Ke· 2025-11-30 23:11
Core Insights
- Deepfake technology has evolved from a conceptual threat into a tangible business risk, as evidenced by incidents like the AI-generated image of an explosion at the Pentagon, which caused a significant drop in the S&P 500 index and briefly erased billions in market value [1]
- The deepfake market is projected to grow from $75 billion in 2023 to $385 billion by 2032, underscoring the urgent need for businesses to upgrade their defenses against misinformation [1]

Group 1: Threats of Deepfake Technology
- Deepfake attacks pose a direct threat to corporate assets, as demonstrated by engineering giant Arup, which lost HK$200 million to a sophisticated scam involving AI-generated executive voices [2]
- Advertising giant WPP faced a similar attack, in which scammers attempted to replicate the CEO's voice and appearance to deceive employees into transferring funds [2]

Group 2: Operational Challenges
- Companies that rely on facial recognition technology will need to replace their security systems by 2026, as existing solutions are ineffective against deepfake threats [3]
- Added verification measures, such as digital watermarks and liveness detection, increase operational costs and complicate decision-making processes [3]

Group 3: Trust Crisis
- The rise of deepfake technology imposes a "trust tax," comprising both direct security investments and the indirect costs of widespread skepticism toward digital communications [4]
- The erosion of trust in digital interactions complicates business collaboration, as every communication may require additional verification [4]

Group 4: Strategic Solutions
- Companies should establish credible verification methods, such as digital signatures and watermarks on sensitive documents, to mitigate the risks posed by deepfakes [5][6]
- Creating a public verification center can give stakeholders authoritative information and strengthen trust in corporate communications [7]
- Training employees to recognize deepfakes and incorporating simulation exercises into onboarding can improve organizational resilience [8]
- Investing in real-time media-tampering detection systems is essential for embedding verification capabilities into core business processes [9]

Group 5: Navigating the Crisis
- The rise of AI-generated content is accelerating the collapse of shared reality, making it crucial for companies to anchor their communications in verifiable facts [10]
- Organizations that prioritize building trustworthy systems will not only withstand deepfake challenges but also emerge as industry leaders in chaotic times [10]
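The "digital signatures on sensitive documents" recommendation above can be sketched with a standard HMAC, assuming the company has distributed a shared secret key out of band (the key, function names, and example message here are all hypothetical). A real deployment would more likely use public-key signatures and certificates so the verifier never holds the signing secret, but the verification flow is the same: no valid tag, no transfer.

```python
import hashlib
import hmac

SHARED_KEY = b"rotate-me-and-distribute-offline"  # hypothetical key, never sent in-band

def sign_message(message: bytes, key: bytes = SHARED_KEY) -> str:
    """Attach an HMAC-SHA256 tag so the recipient can confirm the message
    came from a key holder and was not altered in transit."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_message(message: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Recompute the tag and compare in constant time to resist timing attacks."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

request = b"Transfer 200,000 HKD to account 12345"
tag = sign_message(request)

print(verify_message(request, tag))  # True: authentic, unmodified request
print(verify_message(b"Transfer 200,000,000 HKD to account 99999", tag))  # False: forged
```

The point of the sketch is that authenticity checks rely on a secret a deepfake cannot reproduce: an attacker who can clone an executive's face and voice still cannot produce a valid tag without the key.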
Online Sex Crimes Surge in South Korea; Nearly Half of Suspects Are Teenagers
Yang Shi Xin Wen· 2025-11-16 18:03
Core Insights
- The South Korean National Police Agency reported a significant increase in arrests for online sexual crimes, with more than 3,000 suspects apprehended in the past year, a 47.8% increase over the previous year [1]

Summary by Categories

Crime Statistics
- From November 2023 to October 2024, the police solved 3,411 cases of online sexual crimes and arrested 3,557 suspects, of whom 221 were formally detained [1]
- Cases involving deepfake technology accounted for the largest share at 35.2%, followed by child or adolescent pornography at 34.3% and illegal filming at 19.4% [1]

Demographics of Suspects
- The largest group of suspects, 1,761 in total, were teenagers aged 10 to 19 [1]
- The rise in arrests is attributed both to the increase in deepfake-related cases and to intensified law enforcement efforts [1]

Public Awareness and Education
- Many South Korean students lack awareness of the dangers of deepfake technology in sexual crimes: 62.2% of middle school students and 47.7% of high school students regard deepfakes as mere "pranks" [1]
FTChinese Selection: AI-Generated Fake Invoices Are a Big Problem
日经中文网· 2025-11-13 02:46
Core Viewpoint
- The emergence of AI-generated fake invoices poses a significant threat to financial trust, as these sophisticated tools make it easier for individuals to commit fraud [5][6]

Group 1
- Traditional fraud relied on basic tools like photocopiers and correction fluid, but advances in technology have enabled far more sophisticated techniques [6]
- AI-generated fake invoices now feature realistic trademarks, addresses, and details, even simulating wear and tear such as creases and coffee stains [6]
- The potential for "deepfakes," manipulated video or audio that can make public figures appear to say things they never said, raises concerns about their use in political and financial fraud [6]
Buffett Makes a Rare Public Statement
新华网财经· 2025-11-08 04:11
Core Viewpoint
- Berkshire Hathaway, led by Warren Buffett, issued a statement clarifying that several videos circulating on YouTube, which falsely depict Buffett's comments, are fraudulent and were created using artificial intelligence [1][3]

Group 1
- On June 6, Berkshire Hathaway announced that Buffett had noticed several videos on YouTube featuring comments attributed to him that were generated with AI and included fake images [3]
- The videos may resemble Buffett but have a monotonous voice that is clearly not his, raising concerns that unfamiliar viewers could be misled by the fraudulent content [3]
- Since the Berkshire shareholder meeting in May, Buffett has made few public comments, increasing the potential for misinformation to spread [3]

Group 2
- The rapid spread of "deepfake" content, including fake images, audio, and video, is becoming a significant problem, used for harassment, financial scams, and even election interference [5]
- Analysts highlight the difficulty of preventing and countering the misleading effects of deepfake content, a dilemma for governments and tech giants worldwide [5]
- The U.S. currently has no federal regulations aimed at controlling the risks of artificial intelligence, although California recently signed a law regulating AI chatbots that requires operators to implement key protective measures [5]