Deepfake
UK Government Partners with Microsoft to Combat Rising Deepfake Threats, Misinformation
Crowdfund Insider· 2026-02-08 15:49
The UK Government has taken a significant step in addressing the rising dangers of AI-generated misinformation by partnering with Microsoft (NASDAQ:MSFT) and other industry professionals to develop a deepfake detection system. On Thursday, February 5, 2026, the Home Office unveiled plans to create what it describes as a deepfake detection evaluation framework. This initiative brings together major technology companies, including Microsoft, alongside academics and specialists to establish uniform benchmarks for ...
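The Home Office has not published the framework's metrics or interfaces, so the following is only a minimal sketch of the kind of uniform comparison such an evaluation benchmark could standardize: scoring a candidate detector on a labeled set of real and fake samples. The dataset, threshold, and metric choices here are illustrative assumptions, not details from the announcement.

```python
# Hypothetical sketch: scoring a deepfake detector against a labeled benchmark set.
# The actual framework's metrics and interfaces are not public; this only
# illustrates the kind of uniform comparison such a benchmark could enable.
from sklearn.metrics import roc_auc_score, precision_recall_fscore_support

def evaluate_detector(scores, labels, threshold=0.5):
    """scores: predicted probability each sample is fake; labels: 1 = fake, 0 = real."""
    preds = [1 if s >= threshold else 0 for s in scores]
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary", zero_division=0
    )
    return {
        "auc": roc_auc_score(labels, scores),   # threshold-independent ranking quality
        "precision": precision,                 # fraction of flagged items that are fake
        "recall": recall,                       # fraction of fakes that were caught
        "f1": f1,
    }

# Toy example: three fake and three real clips scored by a hypothetical detector.
print(evaluate_detector([0.9, 0.8, 0.4, 0.3, 0.2, 0.7], [1, 1, 1, 0, 0, 0]))
```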
Barclays Remains Bullish on Microsoft Corporation (MSFT)
Yahoo Finance· 2026-02-08 08:48
Microsoft Corporation (NASDAQ:MSFT) is one of the most promising stocks to buy now. On February 6, Barclays reiterated a Buy rating on Microsoft Corporation (NASDAQ:MSFT) and set a price target of $600; Truist also reiterated a Buy and raised its price target to $675. In another development, Reuters reported on February 5 that Britain will work with Microsoft Corporation (NASDAQ:MSFT), experts, and academics to develop a system that detects deepfake material online, marking a ...
Elon Musk wants to be a trillionaire — here's how SpaceX may get him there
CNBC· 2026-02-07 13:00
Core Insights
- Elon Musk's wealth is increasingly driven by SpaceX, which now constitutes nearly two-thirds of his net worth, estimated at around $845 billion, surpassing the combined wealth of the next three richest individuals [1]
- SpaceX's acquisition of Musk's AI and social media company, xAI, valued the merged entity at $1.25 trillion, with Musk's stake in the company estimated at over $530 billion [2]
- Musk's focus is shifting towards SpaceX, as indicated by Tesla's proxy filing, which acknowledges that a majority of Musk's wealth now comes from other ventures [3]

Company Developments
- SpaceX has secured over $20 billion in federal government contracts, with more lucrative contracts anticipated, and Musk envisions the acquisition as a step towards developing "orbital data centers" [4]
- The merger of SpaceX and xAI may expand access to larger capital markets, particularly for xAI, which has a significant capital requirement [4]
- xAI is currently under investigation by authorities in multiple regions due to concerns over its Grok image generator, which has been linked to the creation of explicit deepfake images [4]

Regulatory Considerations
- It remains uncertain whether the merger between SpaceX and xAI will necessitate regulatory review, as there are calls for investigations into SpaceX regarding undisclosed Chinese investors [5]
AI nearly fooled the whole world: after this post with 87,000 upvotes was debunked, I started doubting everything
36Kr· 2026-01-11 23:41
Core Insights
- A new user named Trowaway_whistleblow claims to be a software engineer at a food delivery platform and is preparing to expose alleged company misconduct [1]
- The story aligns with public perceptions of "evil capitalism" but is revealed to be an AI-generated hoax [2]

Group 1: Allegations Against the Company
- The whistleblower's post details how the platform manipulates algorithms to harm consumers and delivery workers, such as intentionally delaying regular orders to make paid priority orders appear faster [3]
- The platform allegedly charges a "regulatory response fee" to lobby against driver unions, and calculates a "Desperation Score" for drivers based on their acceptance of low-paying orders [3]
- The CEO of DoorDash, Tony Xu, publicly denied the allegations, stating that anyone promoting such a culture would be fired [3]

Group 2: Previous Legal Issues
- DoorDash has previously faced lawsuits for stealing driver tips, resulting in a settlement of $16.75 million [4]
- The exploitation of gig economy workers is not a new issue, as advocacy groups like Los Deliveristas Unidos have noted that such allegations resonate with the experiences of delivery workers [4]

Group 3: Investigation and Verification Challenges
- A journalist, Casey Newton, attempted to verify the whistleblower's identity but noticed inconsistencies, such as a spelling error in the whistleblower's communication [5]
- The whistleblower provided a photo of an employee ID, which was later identified as AI-generated, raising concerns about the authenticity of the claims [11]
- The rapid advancement of AI tools has made it easier for individuals to create convincing forgeries, complicating the verification process for journalists [14][15]

Group 4: Broader Implications of AI in Information Verification
- The rise of deepfake technology poses significant challenges for journalists, as it increases the difficulty of distinguishing between real and fabricated information [15][16]
- The phenomenon of deepfakes contributes to a growing distrust in information sources, leading to societal uncertainty [16]
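The article does not say how the employee-ID photo was identified as AI-generated. One cheap first-pass check that verification desks commonly apply is inspecting an image's embedded metadata; the sketch below uses Pillow, the file name is hypothetical, and a missing or present EXIF block is at best a weak signal, never proof either way.

```python
# Hypothetical first-pass check: dump EXIF metadata from an image with Pillow.
# Missing camera metadata does not prove an image is AI-generated, and present
# metadata can be forged; this only illustrates one cheap verification signal.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> dict:
    """Return EXIF tags as a {name: value} dict (empty if the image carries none)."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

metadata = dump_exif("employee_id_photo.jpg")   # hypothetical file name
if not metadata:
    print("No EXIF metadata found - worth deeper scrutiny, but not proof of a fake.")
else:
    for key in ("Make", "Model", "DateTime"):
        print(key, "=", metadata.get(key))
```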
Grok and X should be suspended from Apple, Google app stores, Democratic senators say
CNBC· 2026-01-09 20:39
Core Viewpoint
- Three Democratic senators are urging Apple and Google to suspend the X and Grok apps due to concerns over nonconsensual explicit content and child sexual abuse imagery [2][5]

Group 1: Legislative Action
- Senators Ron Wyden, Ed Markey, and Ben Ray Lujan have called for the immediate removal of the X and Grok apps from app stores until Elon Musk addresses the illegal activity [2]
- The senators argue that inaction would undermine the tech giants' claims of providing a safer user experience [2]

Group 2: Content Concerns
- Grok and X have been criticized for allowing users to generate and share "deepfake" explicit content without consent, including images that denigrate individuals based on race or ethnicity [3]
- A specific incident involved Grok generating an inappropriate image of a descendant of Holocaust survivors, which has drawn significant backlash [4]

Group 3: Regulatory and Safety Issues
- The issues surrounding Grok have led to regulatory scrutiny from various countries, although the Federal Trade Commission and Department of Justice have not yet indicated plans to investigate xAI [4]
- Musk and X have stated that users generating illegal content will face consequences similar to those who upload such content directly [5]

Group 4: Industry Response
- Apple and Google have stringent guidelines requiring app developers to prevent the sharing of harmful content, and similar apps have faced suspension for failing to filter inappropriate material [6]
- Despite recent updates to Grok's features, concerns remain as users can still generate harmful content without consent [6][7]

Group 5: Financial Developments
- xAI has successfully raised a $20 billion funding round from notable investors, including Nvidia and Cisco Investments, amidst the ongoing controversies [8]
YouTube's new AI deepfake tracking tool is alarming experts and creators
CNBC· 2025-12-02 12:00
Core Insights
- YouTube has introduced a "likeness detection" tool to help creators remove AI-generated videos that exploit their likeness, but concerns have been raised about the use of creators' biometric data for training AI models [1][3][5]

Group 1: YouTube's Likeness Detection Tool
- The likeness detection tool scans videos to identify unauthorized use of a creator's face in deepfakes and is being expanded to millions of creators in the YouTube Partner Program [3][9]
- To use the tool, creators must upload a government ID and a biometric video of their face, which raises concerns about the potential misuse of this sensitive data [4][5]
- YouTube maintains that the biometric data is only used for identity verification and to power the safety feature, but experts caution that the policy allows for future misuse [5][8]

Group 2: Industry Concerns and Expert Opinions
- Experts have expressed concerns about YouTube's biometric policy, stating that creators should be cautious about giving control of their likeness to a platform [7][8]
- Third-party companies like Vermillio and Loti are working with creators to protect their likeness rights, emphasizing the value of likeness in the AI era [7]
- The rapid improvement of AI-generated video tools raises new concerns for creators, as their likeness and voice are central to their business [11]
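YouTube has not disclosed how its matcher works. The sketch below shows the general face-embedding approach a likeness-detection system could take, built on the open-source face_recognition library rather than anything YouTube has confirmed; the file paths and the 0.6 distance cutoff are assumptions for illustration.

```python
# Hypothetical sketch of likeness matching via face embeddings; YouTube has not
# disclosed its actual pipeline. Uses the open-source face_recognition library.
import face_recognition

def likeness_match(reference_image_path, candidate_frame_path, max_distance=0.6):
    """Return True if a face in the candidate frame resembles the reference face."""
    ref_image = face_recognition.load_image_file(reference_image_path)
    ref_encodings = face_recognition.face_encodings(ref_image)
    if not ref_encodings:
        raise ValueError("no face found in the reference image")

    frame = face_recognition.load_image_file(candidate_frame_path)
    for encoding in face_recognition.face_encodings(frame):
        # Lower distance means more similar; 0.6 is the library's conventional cutoff.
        if face_recognition.face_distance([ref_encodings[0]], encoding)[0] <= max_distance:
            return True
    return False

# A production system would sample many frames per video and aggregate matches
# before surfacing the video to the creator for review.
```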
Buffett's message to deepfake AI videos: "IT'S NOT ME."
Yahoo Finance· 2025-11-08 00:30
Fraud Concerns
- Berkshire Hathaway expresses concern over AI-generated videos using Warren Buffett's image and a voice impersonating him [1]
- Warren Buffett is worried about the proliferation of fraudulent AI videos [1]

Investor Awareness
- The statement highlights the potential for AI-generated content to mislead investors [1]
OpenAI's Sora 2 sparks AI 'slop' backlash
CNBC Television· 2025-10-02 16:55
OpenAI Valuation and Product Launch
- OpenAI is reportedly hitting a $500 billion valuation [1]
- OpenAI launched Sora 2, its first video generation app, similar to TikTok [2]
- Sora 2 is considered a major improvement over Sora 1, generating deepfake-style clips [3]

Competition and Safety Concerns
- OpenAI faces competition in the video generation space from Meta, Google, ByteDance, and Alibaba [2]
- Concerns are raised about OpenAI potentially prioritizing speed over safety, echoing past patterns [4][5]
- Despite OpenAI's safety systems, users are finding ways to bypass guardrails [4][8]

Content Moderation and User Responsibility
- OpenAI insists on having robust safety systems, including content moderation and bans on explicit material [4]
- Public figures must opt in to have their likeness used in Sora 2 videos, while copyright holders must opt out [9]
- Even with safeguards, individuals may find ways to circumvent them, highlighting the challenges of content moderation [8]
Deepfake political scam ads surge on Meta platforms, watchdog says
TechXplore· 2025-10-02 08:50
Core Insights
- The article highlights a significant rise in online fraud targeting US consumers, particularly through political scam ads on Meta's platforms [3][9]
- A report from the Tech Transparency Project reveals that 63 scam advertisers spent a total of $49 million on Facebook and Instagram, primarily targeting seniors with misleading ads about government benefits [4][5]

Group 1: Online Fraud and Scams
- Surveys indicate a growing number of American adults are experiencing scams or impersonation attacks, with a notable increase in complaints from older adults [9][10]
- The Federal Trade Commission reported a more than four-fold increase in complaints from older adults losing $10,000 or more to scammers since 2020 [10]

Group 2: Meta's Role and Response
- The Tech Transparency Project identified that scammers are exploiting advances in AI technology and Meta's lax content moderation to reach new victims [5][6]
- Despite Meta's policies against scams, the report states that nearly half of the identified scam advertisers continued to run ads even after being flagged for policy violations [7]

Group 3: Specific Examples of Scams
- One notable scam involved an advertiser using a deepfake video of Donald Trump falsely promising stimulus checks, targeting individuals over 65 across more than 20 states [8][9]
- The misleading ad directed users to a website claiming to offer a "FREE $5,000 Check from Trump," which was part of a broader trend of bogus stimulus offers circulating on social media [9]
GEN Boosts Cyber Safety With Norton Deepfake Protection on Intel PCs
ZACKS· 2025-10-01 15:41
Core Insights
- Gen Digital (GEN) has introduced a new feature in Norton 360 to protect users from deepfake scams, collaborating with Intel to enable real-time protection on Intel Core Ultra processors [1][4]
- The feature enhances existing scam protection by checking both video and audio for signs of fake content and is currently available in the US, UK, Australia, and New Zealand [2][3]

Product Development
- Continuous updates to Norton 360 reflect the company's commitment to addressing emerging AI scams, potentially attracting new customers and retaining existing ones [3][4]
- The new feature is designed to provide users with greater confidence while browsing or consuming content online, as AI scams become increasingly sophisticated [4]

Partnerships and Market Expansion
- Gen Digital is expanding its deepfake protection beyond Intel to include Qualcomm and AMD, ensuring Norton protection is available across major AI PC platforms [5][7]
- The scam detection features are already operational on Windows AI PCs powered by Qualcomm Snapdragon X chips, allowing real-time alerts without cloud dependency [6]
- Support for AMD is expected later this year, further broadening the coverage for AI PC users [7][8]

Strategic Positioning
- This multi-partner approach positions Norton as a standard security solution for users seeking reliable on-device protection against scams and deepfakes, regardless of the processor brand [8]
- The partnerships with Qualcomm and AMD enhance market reach and flexibility for customers, contributing to long-term growth opportunities in cyber safety and financial wellness [8]
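Gen Digital has not published Norton's detection model, so the following is only a generic sketch of the on-device pattern the coverage describes: running a local model with onnxruntime so no audio leaves the machine. The model file name, input shape, and windowing idea are assumptions for illustration, not Norton's implementation.

```python
# Hypothetical sketch of on-device scoring with a local model (no cloud round-trip),
# the general pattern a feature like Norton's could follow. "deepfake_audio.onnx"
# and its expected input shape are assumptions for illustration.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("deepfake_audio.onnx")   # model file stays on the device
input_name = session.get_inputs()[0].name

def score_audio_window(samples: np.ndarray) -> float:
    """samples: a short mono window of float32 audio; returns a fake-likelihood score."""
    batch = samples.astype(np.float32)[np.newaxis, :]    # add a batch dimension
    outputs = session.run(None, {input_name: batch})     # run the local model
    return float(np.ravel(outputs[0])[0])

# Real-time use would slide this window over the audio stream and alert the user
# only when the score stays above a threshold across several consecutive windows.
```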