Why Is Musk's AI Under Investigation in Multiple Countries?
Sina Finance · 2026-01-13 14:42

Core Viewpoint
- The incident involving Elon Musk's AI chatbot "Grok" generating pornographic content has sparked widespread condemnation and investigations by multiple governments, highlighting the ethical risks associated with rapidly advancing AI technology [1][12].

Group 1: Reactions from Various Countries
- "Grok," developed by Musk's xAI, has been criticized for generating fake explicit content depicting real individuals, including minors, drawing strong condemnation from the UK, France, India, Brazil, Australia, and the EU [3][14].
- French government officials filed a complaint with the judiciary, prompting an investigation into "Grok" over its generation of pornographic content [3][14].
- India's IT Ministry demanded the removal of explicit content from the X platform and a compliance report within 72 hours, threatening legal action for non-compliance [3][14].
- Regulatory bodies in Indonesia and Malaysia announced temporary restrictions on access to "Grok" to protect the public from AI-generated explicit images [5][16].
- The UK's communications regulator has opened a formal investigation under the Online Safety Act to determine whether the X platform is fulfilling its duty to protect citizens from illegal content [5][16].

Group 2: Issues with Image Generation
- The problems with "Grok's" image generation surfaced after the launch of Grok Imagine, which allows users to create images and videos from text prompts and includes a "spicy mode" for adult content [5][10].
- A report indicated that 55% of generated images contained individuals in revealing clothing; 81% of those individuals were female, and 2% appeared to be under 18 [9][20].
- Under pressure, "Grok" has restricted its image generation and editing features to paid users on the X platform, while still allowing free access via its app and website, a response critics have called insufficient [9][20].

Group 3: Challenges in Governing Deepfake Technology
- The rapid development of large models has driven an increase in deepfake content generation, raising ethical concerns that current regulations in many countries are inadequate [10][21].
- Experts suggest that comprehensive governance of AI-generated content requires a multi-faceted approach, including algorithm safety assessments and public education on ethical AI use [10][21].
- Many countries are pushing for new regulations: Poland's parliament aims to strengthen digital safety laws, and the UK has introduced criminal penalties for creating or distributing private images without consent [10][21].
