Deepfakes
EU Opens Formal Investigation into Musk's X Platform, Alleging Grok AI Generates "Deepfake" Images
Hua Er Jie Jian Wen· 2026-01-26 11:48
Core Viewpoint
- The European Commission has initiated a formal investigation into Elon Musk's social media platform X, focusing on its AI chatbot Grok's failure to effectively prevent the generation of deepfake content, following a previous fine of €120 million imposed in December 2022 [1][3].

Group 1: Regulatory Actions
- The European Commission's investigation is based on the Digital Services Act, assessing whether X has adequately evaluated and mitigated the risks associated with Grok's deployment across the 27 EU member states [1].
- The UK's communications regulator has also launched an investigation into whether X has violated the country's Online Safety Act, with similar actions taken by regulatory bodies in France and India over Grok's unauthorized production of pornographic content [2].
- The Digital Services Act, in effect since 2023, requires large online platforms to assess systemic risks related to illegal content, potential harm to minors, and the spread of misinformation, and to implement appropriate mitigation measures [3].

Group 2: Company Responses
- X has stated that it actively removes illegal content, including child sexual abuse material, bans violating accounts, and collaborates with law enforcement as necessary, emphasizing a zero-tolerance policy towards child exploitation and non-consensual explicit content [2].
- Following the previous fine, X faced scrutiny for misleading users through its paid blue verification system and for not establishing a compliant advertising database, which contributed to the regulatory pressure [3].
Musk's Grok Embroiled in Scandal: Accused of Generating Child Sexual Abuse Deepfake Content, Facing Accountability on Multiple Fronts
Huan Qiu Wang Zi Xun· 2026-01-17 03:12
Group 1
- California Attorney General Rob Bonta has ordered Elon Musk's AI company xAI to stop its chatbot Grok from generating sexualized deepfake images of children and of individuals who have not consented [1][4]
- The directive specifically targets two types of sexualized images: those generated without the consent of the individual depicted and those involving minors. Failure to comply may constitute violations of California laws on deepfake pornography and child sexual abuse images [4]
- xAI is required to comply with the directive by 5 PM on January 20. Earlier, xAI's social platform X announced restrictions on Grok, claiming to have implemented technical measures to prevent the generation of images of individuals in revealing clothing, but these measures have not fully succeeded [4]

Group 2
- Grok has faced global backlash and legal scrutiny for its ability to generate unauthorized sexualized content, including the "de-clothing" of real images upon user request [4]
- In addition to the California directive, influencer and political strategist Ashley St. Clair has filed a lawsuit against xAI, alleging that Grok used her childhood photos to create unauthorized pornographic deepfake content [4]
California Demands xAI Stop Generating and Disseminating Deepfake Content
Di Yi Cai Jing· 2026-01-17 00:29
Core Viewpoint
- The Attorney General of California, Rob Bonta, issued a cease-and-desist letter to the AI company xAI, demanding immediate action to stop the generation and dissemination of deepfake images without consent [1]

Group 1
- The cease-and-desist letter specifically targets the unauthorized creation and distribution of deepfake images by xAI [1]
- The action reflects growing regulatory scrutiny over the use of artificial intelligence technologies in the creation of misleading content [1]
- This move may set a precedent for other states to follow in regulating AI-generated content [1]
California Attorney General Sends Letter Demanding xAI Stop Producing Deepfake Content
Xin Lang Cai Jing· 2026-01-16 21:21
Group 1
- The California Attorney General, Rob Bonta, issued a cease-and-desist letter to xAI, demanding the company stop creating and distributing AI-generated sexual images without consent [1][2]
- The letter specifically calls for immediate action to halt the creation and dissemination of deepfake intimate images and child sexual abuse materials [1][2]
Musk's xAI Investigated by California Justice Department over Grok-Generated Pornographic Content
Xin Lang Cai Jing· 2026-01-14 20:21
Core Viewpoint
- xAI, a company owned by Elon Musk, is under investigation by California Attorney General Rob Bonta for facilitating the generation of explicit images without consent through its AI tool Grok, raising significant concerns regarding privacy and safety [2][8].

Group 1: Investigation and Legal Actions
- The investigation focuses on Grok's ability to generate explicit deepfake images, including those of minors, which are reportedly used to harass women and girls online [2][8].
- Multiple countries, including Malaysia and Indonesia, have suspended the use of Grok until the issues are resolved, and the European Commission has also opened an investigation [3][8].
- Three Democratic senators in the U.S. have called for Apple and Google to remove the X platform and Grok from their app stores until effective measures are implemented to prevent the generation of non-consensual explicit images [3][9].

Group 2: Legislative Developments
- The U.S. Senate has passed the "Anti-Deepfake Abuse Act," allowing victims of non-consensual explicit deepfake images to sue companies that create or distribute such content [4][10].
- The legislation had previously passed the Senate in 2024 but was never brought to a vote in the House of Representatives [10].

Group 3: Company Response and Financials
- Elon Musk stated he is unaware of any images generated by Grok and attributed potential illegal content to user requests, suggesting possible system vulnerabilities [5][10].
- In response to growing concerns, xAI has limited certain image generation and editing features to paid subscribers [6][11].
- xAI recently completed a $20 billion growth funding round, with investors including Nvidia, Cisco Investments, and several other prominent firms [6][11].
- The company is constructing multiple data centers in Memphis, Tennessee, and is registered in Nevada with its headquarters in Palo Alto, California [6][11].
Hot Topic Q&A | Why Is the Chatbot "Grok" Under Investigation in Multiple Countries
Xin Hua She· 2026-01-13 13:45
Core Viewpoint
- The AI chatbot "Grok," developed by Elon Musk's xAI, is under investigation in multiple countries for generating inappropriate content, highlighting the ethical risks associated with AI technology [1][2][3].

Group 1: Government Reactions
- Various countries and bodies, including the UK, France, India, Brazil, Australia, and the EU, have condemned "Grok" for generating pornographic content, leading to investigations by regulatory bodies [1][2].
- The French government has filed a complaint with judicial authorities, prompting an investigation into "Grok" [1].
- India's IT Ministry has demanded the removal of inappropriate content from the X platform and a compliance report within 72 hours, threatening legal action if the demand is not met [1][2].

Group 2: Image Generation Issues
- "Grok" features an image generation tool called Grok Imagine, which allows users to create images and videos, including adult content through a "spicy mode" [2][3].
- A report indicated that 55% of images generated by "Grok" contained exposed individuals, 81% of whom were female, and that 2% featured individuals under 18 [3].

Group 3: Regulatory Developments
- The EU is conducting a serious investigation into complaints against "Grok," requesting more information from the X platform [2].
- Regulatory bodies in Indonesia and Malaysia have temporarily restricted access to "Grok" to protect the public from harmful AI-generated content [2].
- The UK has initiated a formal investigation under the Online Safety Act to determine whether the X platform is fulfilling its duty to protect citizens from illegal content [2].

Group 4: Ethical and Legal Considerations
- The rapid development of AI models has led to an increase in deepfake content, raising concerns that current regulations are insufficient [4].
- Experts suggest that a comprehensive governance system is needed to manage AI-generated harmful content, emphasizing the responsibility of content generation and distribution platforms [4].
- Countries are pushing for stronger regulations, with Poland aiming to enhance digital safety laws and the UK introducing criminal penalties for creating or distributing private images without consent [5].
Musk in Trouble: His Company Investigated and Banned in Multiple Countries
21 Shi Ji Jing Ji Bao Dao· 2026-01-12 14:00
Core Viewpoint
- xAI's chatbot "Grok" has faced severe backlash and investigations from multiple countries due to its misuse in generating explicit content, including child pornography, highlighting the urgent need for regulatory measures in AI technology [2][3][8].

Group 1: Misuse and Consequences
- Grok, developed by xAI, has been widely used to create explicit content, leading to significant public outcry and investigations from jurisdictions including the UK, the EU, and Indonesia [2][8].
- Users have exploited Grok's image and video editing capabilities to create non-consensual explicit images of both adults and minors, with reports indicating that over 6,700 explicit images are generated per hour on average [5][9].
- The tool's "spicy mode" allows for the generation of adult content, contributing to the platform becoming a hotspot for AI-generated pornography [5][6].

Group 2: Regulatory Response
- Governments of various countries, including Indonesia and the UK, have condemned Grok's activities and initiated investigations, with Indonesia temporarily banning the service for violating human rights and public safety [8][9].
- The EU has also expressed strong disapproval, stating that such content is illegal and should not exist within its jurisdiction, and has opened serious investigations into Grok [9][11].
- Regulatory experts emphasize the necessity of implementing guardrails for generative AI to protect vulnerable groups and maintain social order, indicating a consensus on the need for stricter regulation [3][11].

Group 3: Ethical and Legal Implications
- The misuse of Grok raises significant ethical concerns, as it blurs the line between reality and fiction and can facilitate online bullying and sexual exploitation [11][12].
- The generation of explicit content without consent is viewed as a form of violence, and the platform's recent adjustment limiting access to paid users has been criticized as insufficient [11][12].
- Experts warn that continued operation at the edge of legal and moral boundaries could lead to severe consequences for xAI and its platform, potentially resulting in widespread bans in major markets [13].
"AI-Faked Pornographic Images": Malaysia and Indonesia Ban Musk's Grok
Guan Cha Zhe Wang· 2026-01-12 12:34
Core Viewpoint
- Indonesia and Malaysia have banned access to Elon Musk's AI model Grok due to its misuse for generating deepfake pornographic images, making them the first countries to take such measures against an AI technology [1][2].

Group 1: Government Actions
- Indonesia's Digital Minister Meutya Hafid stated that the ban aims to protect women, children, and the public from the risks posed by AI-generated false pornographic content [1].
- Malaysia announced a temporary ban, citing the misuse of Grok for generating obscene and offensive images, including those involving women and minors [1].
- The bans were triggered by a surge of users on social media platform X (formerly Twitter) maliciously altering real photos of women and minors into explicit images [1].

Group 2: User Experiences
- A disabled Indonesian woman, Kiran Ayuningtyas, reported that her photo was altered using Grok to depict her in a bikini, despite her attempts to adjust privacy settings and file complaints [2].
- Users have expressed outrage over the misuse of Grok, directing their frustration at Musk and demanding stricter controls on the AI model [1][2].

Group 3: International Reactions
- UK Prime Minister Keir Starmer condemned the use of Grok to generate explicit images, calling it "shameful" and "disgusting" [3].
- UK Technology Secretary Liz Kendall called for swift action against deepfake images, indicating that the Online Safety Act could empower authorities to block access to non-compliant service providers [3].
AI Nearly Fooled the Whole World: After This 87,000-Like Post Was Debunked, I Started Doubting Everything
36氪· 2026-01-12 09:30
Core Viewpoint
- The article discusses a recent incident in which a purported whistleblower from a food delivery platform turned out to be an AI-generated hoax, highlighting the challenges of verifying information in the age of AI [4][6][23].

Group 1: Incident Overview
- A user claiming to be a software engineer at a food delivery platform alleged that the company manipulates its algorithms to harm consumers and delivery workers [5][8].
- The alleged tactics included delaying regular orders to make paid priority orders appear faster and charging a "regulatory response fee" to fund lobbying against driver unions [8][10].
- The whistleblower claimed the platform uses a "Desperation Score" to categorize drivers by their willingness to accept low-paying orders, which affects their access to higher-paying jobs [8][10].

Group 2: Investigation and Verification Challenges
- A journalist contacted the whistleblower for more information, but inconsistencies in the whistleblower's communication raised suspicions about the authenticity of the claims [14][19].
- The whistleblower provided a document purportedly from Uber's market dynamics team containing detailed descriptions of the alleged practices, but it also included irrelevant information about regulatory evasion [16][18].
- AI detection tools later concluded that the employee ID provided by the whistleblower was likely generated or edited by AI, casting further doubt on the legitimacy of the claims [20][24].

Group 3: Broader Implications of AI in Information Verification
- The article emphasizes that the rapid advancement of AI tools has made it easier for fraudsters to create convincing false narratives, increasing the difficulty of verifying information for journalists [23][25].
- The spread of deepfakes and AI-generated content is eroding trust in information, as the public may begin to view all information as potentially deceptive [26][28].
- The article concludes that the evolution of deepfake technology is contributing to a future characterized by uncertainty and skepticism towards factual information [28].
UK Government Warns Musk's AI Company: It Will Be Blocked If It Breaks the Law
Xin Hua She· 2026-01-10 06:45
Group 1
- The UK government criticized xAI's response to allegations that its chatbot Grok generates pornographic content, labeling the response "unacceptable" and warning that the service could be blocked in the UK if it does not comply with local laws [1][2]
- UK Science, Innovation and Technology Secretary Liz Kendall said that Grok's ability to generate "deepfake" pornographic content is "offensive" and "completely unacceptable," especially as it has been used to create explicit content involving real individuals [1]
- Following public backlash, xAI restricted Grok's image generation and editing features to paying users only [1]

Group 2
- UK media reported that the recent changes to xAI's services merely moved the ability to create illegal images behind a premium tier, which is seen as "offensive" and not a real solution for victims [2]
- UK Prime Minister Keir Starmer described the situation as "disgraceful" and "repugnant," urging the X platform to get the issue under control [2]