Deepfake
GEN Boosts Cyber Safety With Norton Deepfake Protection on Intel PCs
ZACKS· 2025-10-01 15:41
Core Insights
- Gen Digital (GEN) has introduced a new feature in Norton 360 to protect users from deepfake scams, collaborating with Intel to enable real-time protection on Intel Core Ultra processors [1][4]
- The feature enhances existing scam protection by checking both video and audio for signs of fake content and is currently available in the US, UK, Australia, and New Zealand [2][3]

Product Development
- Continuous updates to Norton 360 reflect the company's commitment to addressing emerging AI scams, potentially attracting new customers and retaining existing ones [3][4]
- The new feature is designed to give users greater confidence while browsing or consuming content online, as AI scams become increasingly sophisticated [4]

Partnerships and Market Expansion
- Gen Digital is expanding its deepfake protection beyond Intel to include Qualcomm and AMD, ensuring Norton protection is available across major AI PC platforms [5][7]
- The scam detection features are already operational on Windows AI PCs powered by Qualcomm Snapdragon X chips, allowing real-time alerts without cloud dependency [6]
- Support for AMD is expected later this year, further broadening coverage for AI PC users [7][8]

Strategic Positioning
- This multi-partner approach positions Norton as a standard security solution for users seeking reliable on-device protection against scams and deepfakes, regardless of processor brand [8]
- The partnerships with Qualcomm and AMD enhance market reach and flexibility for customers, contributing to long-term growth opportunities in cyber safety and financial wellness [8]
X @BBC News (World)
BBC News (World)· 2025-09-23 22:55
Industry Trend
- Bollywood stars are fighting for personality rights amid a surge in deepfakes [1]
X @Bloomberg
Bloomberg· 2025-09-14 15:14
Cybersecurity Threat
- A suspected North Korean state-sponsored hacking group utilized ChatGPT to generate a deepfake of a military ID document [1]
- The deepfake was employed in an attack targeting a South Korean entity [1]
X @The Wall Street Journal
The Wall Street Journal· 2025-09-13 12:33
Technology & Security
- AI technology can clone anyone's voice [1]
- A security firm produced a deepfake version of a news broadcast [1]

Potential Risks
- Deepfake technology could be used to spread disinformation [1]
We asked experts about this glitch in Trump's memorial video for Charlie Kirk
NBC News· 2025-09-11 22:56
Authenticity Assessment
- Experts suggest the video of Trump does not show evidence of being synthetically generated or a deepfake [2]
- Experts indicate the audio in the video is authentic [2]
- Analysis suggests the video is likely the result of multiple videos being stitched together [2]

Source Verification
- A White House spokesperson denies the use of AI in the video [3]
- The White House spokesperson condemns the sharing of conspiracy theories related to the video [3]

Social Media Context
- Online posts suggested the video of Trump might be AI-generated, with some posts receiving millions of views [1]
- Trump's team frequently uses deepfakes on social media for memes and messaging [3]
Deepfake Risks Intensify: Financial One Account's Intelligent Visual Anti-Fraud Product Serves Leading Hong Kong Banks
Zhong Jin Zai Xian· 2025-08-26 05:29
Core Viewpoint
- The emergence of deepfake technology as a financial risk tool has led to a covert and intense "AI arms race" within the global banking system, prompting Chinese fintech firms to leverage AI technology to enhance financial security defenses [1][4]

Group 1: Deepfake Technology and Financial Risks
- The proliferation of deepfake technology has become a global financial concern, with AI-based identity fraud cases increasing over 30 times in 2023, making the banking system a primary target for attacks [4]
- In Hong Kong, the demand for accurate identity verification has intensified due to a high volume of cross-border transactions and remote account openings, making deepfake defense a focal point for the industry [4]

Group 2: Financial One Account's AI Solutions
- Financial One Account has successfully implemented its intelligent visual anti-fraud product in Hong Kong, signing contracts for projects involving electronic identity verification (eKYC) and deepfake detection technology, with a total project value in the billion range [3][5]
- The intelligent visual anti-fraud system includes features such as facial recognition, multi-national document NFC recognition, deepfake detection, and device risk assessment, boasting seven core identification capabilities [5]

Group 3: Market Adaptation and Compliance
- The intelligent visual anti-fraud product is highly compatible with Hong Kong's market requirements for data security and regulatory compliance, addressing cross-border data, secure operations, and identity verification needs [6]
- The successful project in Hong Kong not only strengthens Financial One Account's technological barriers in the high-end market but also serves as a model for Chinese fintech companies in international financial governance [7]

Group 4: Future Expansion Plans
- Financial One Account plans to expand its applications in Southeast Asia, leveraging its core capabilities in big data, artificial intelligence, and risk control to enhance compliance and efficiency in international markets [7]
Gaming "Refund-Only" Policies with AI Images: Are These People Really This Desperate?
36氪· 2025-08-01 00:17
Core Viewpoint
- The article discusses the increasing prevalence of AI-generated images and their implications for e-commerce, particularly in the context of refund fraud, raising concerns about the balance between technology and regulation [3][4][6]

Group 1: AI and E-commerce
- Some consumers are exploiting AI-generated images to falsely claim product defects and obtain refunds, creating significant challenges for e-commerce platforms and sellers [4][10]
- The ease of creating realistic AI images has lowered the barrier for fraudulent activities, transforming the refund process into a potential avenue for scams [17][19]
- Instances of refund fraud using AI-generated evidence have been reported, with sellers sharing their experiences of encountering manipulated images and videos [10][12]

Group 2: Technology and Regulation
- The article highlights the blurred lines between technology and regulation as AI-generated content becomes more sophisticated, prompting discussions on how to establish effective guidelines [6][19]
- The emergence of AI tools has simplified the creation of deceptive content, raising concerns about the potential for widespread abuse in online marketplaces [17][19]
- There is a call for comprehensive industry standards to regulate AI technology and prevent its misuse, ensuring that it serves humanity positively [43]

Group 3: Challenges in Identifying AI Content
- The article emphasizes the difficulty of distinguishing AI-generated images from real ones, even for experienced individuals, due to the advanced capabilities of AI [31][36]
- Suggested methods for identifying AI-generated content include tracing the image source and observing subtle imperfections in the images [38][39]
- Vigilance and improved detection methods are crucial as AI technology continues to evolve and integrate into various sectors [37][41]
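The identification methods mentioned above (tracing an image's source and looking for tell-tale traces) can be partially automated. As a minimal illustrative sketch, not something described in the article: many AI image tools embed generator names in a file's metadata, which a simple byte scan can surface. This is a weak heuristic at best, since metadata is trivially stripped or forged, and real forensic detection relies on far deeper analysis.

```python
# Naive heuristic: flag images whose raw bytes mention known AI generators.
# The marker list is an illustrative assumption, not an exhaustive catalog.
GENERATOR_MARKERS = [b"Stable Diffusion", b"Midjourney", b"DALL-E", b"c2pa"]

def suspicious_metadata(image_bytes: bytes) -> list[str]:
    """Return the generator markers found anywhere in the raw image bytes."""
    lowered = image_bytes.lower()
    return [m.decode() for m in GENERATOR_MARKERS if m.lower() in lowered]

# Example: a fake PNG-like payload carrying a generator tag in a text chunk
payload = b"\x89PNG\r\n\x1a\n...tEXtparameters Stable Diffusion v1.5..."
print(suspicious_metadata(payload))  # ['Stable Diffusion']
```

An empty result proves nothing: a fraudster can strip metadata in one step, which is why the article's other suggestion, inspecting the image itself for subtle imperfections, remains necessary.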
X @BBC News (World)
BBC News (World)· 2025-07-23 02:19
Deepfake Concerns
- The report highlights the theft of an Indian woman's identity for use in erotic AI-generated content, raising concerns about deepfake deception [1]
- The incident underscores the potential for misuse of AI technology to create harmful and non-consensual content [1]
How Fake Job Seekers Are Stealing Remote Jobs
CNBC· 2025-07-11 16:00
Key Findings
- Deepfake technology is enabling fake job candidates to infiltrate the hiring process, posing a significant threat to organizations [3][4]
- Gartner predicts that by 2028, 25% (1 in 4) of job candidates worldwide will be fake [3]
- Vidoc Security Lab found that 16.8% (approximately 1 in 6) of job applicants are fake [6]
- A Resume Genius survey indicates that approximately 17% of U.S. hiring managers have encountered deepfake technology during video interviews [10]

Impact and Risks
- Deepfake scams have already cost companies millions of dollars worldwide, and the threat is growing [12]
- AI-generated fraud, including deepfakes, could cost the U.S. financial sector up to $40 billion by 2027, up from $12.3 billion in 2023 [12]
- Fake candidates can gain access to sensitive data, steal data, write malicious code, and open the door to other types of fraud [11][12]
- Hiring fake candidates from sanctioned nations poses a national security concern, as salaries can fund illicit activities [20]

Contributing Factors
- The rise of remote work, accelerated by the pandemic, has contributed to the increase in deepfake job seekers [7][8]
- Virtual interviews, while offering convenience and cost savings, have opened the door to new risks [9][10]

Countermeasures and Concerns
- Companies may need to adjust their hiring processes, potentially switching to in-person interviews to combat the rise of fake candidates [22]
- Concerns exist that the focus on avoiding fake candidates could introduce hiring biases favoring local candidates and in-person interviews [23]
- Increased scrutiny and longer hiring processes may negatively impact genuine candidates [23][24]
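As a quick sanity check on the loss figures cited above (the article states the endpoints but does not compute the trajectory), growing from $12.3 billion in 2023 to a projected $40 billion in 2027 implies a compound annual growth rate of roughly a third per year:

```python
# Implied compound annual growth rate of the cited U.S. fraud-loss figures,
# 2023 ($12.3B) to projected 2027 ($40B). The endpoints come from the article;
# the CAGR calculation is our own back-of-the-envelope check.
start, end, years = 12.3, 40.0, 4
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 34% per year
```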