Musk Sues OpenAI and Apple for Monopolistic Conduct: App Store Ranking Manipulation Alleged, Billions of Dollars in Damages Sought
量子位· 2025-08-26 05:46
Core Viewpoint
- Elon Musk's xAI has filed an antitrust lawsuit against OpenAI and Apple, accusing them of manipulating app rankings in the App Store to favor ChatGPT while suppressing Musk's own AI application, Grok [1][9][13].

Group 1: Lawsuit Details
- The lawsuit claims that a partnership agreement between Apple and OpenAI constitutes an illegal monopoly in the AI market, leading to unfair competition [9][14].
- Musk alleges that the agreement allows ChatGPT to be the only generative AI chatbot integrated into Apple's iOS, iPadOS, and macOS, which significantly hampers Grok's visibility to billions of potential users [10][12].
- xAI is seeking billions of dollars in damages and a court ruling to declare the collaboration between OpenAI and Apple unlawful [14].

Group 2: Market Manipulation Allegations
- Musk accuses Apple of manipulating App Store rankings and delaying updates for Grok, putting it at a disadvantage compared to ChatGPT [13].
- Observations from users indicate that the App Store is heavily promoting ChatGPT, while other generative AI applications, including Grok, are largely overlooked [3][25].
- Despite Musk's claims, some users noted that the AI application rankings are not exclusively dominated by ChatGPT, suggesting that competition still exists [30].

Group 3: Background Context
- Musk's ongoing conflict with OpenAI is well-documented, stemming from his belief that the organization has strayed from its original non-profit mission to pursue profit [16][17].
- Previous tensions between Musk and Apple CEO Tim Cook have included Musk's criticism of Apple's App Store fees and a failed attempt to discuss a potential acquisition of Tesla by Apple [22][23].
- The current lawsuit reflects Musk's consistent adversarial stance towards both OpenAI and Apple, indicating a long-standing rivalry in the tech industry [20][24].
World First: Chinese Team Successfully Transplants a Gene-Edited Pig Lung into a Human Body
量子位· 2025-08-26 05:46
Core Viewpoint
- The successful transplantation of a gene-edited pig lung into a human marks a significant milestone in xenotransplantation, potentially addressing the shortage of human organ donors [1][5][12].

Group 1: Research Achievement
- A team led by Professor He Jianxing from Guangzhou Medical University successfully transplanted a gene-edited left lung from a Bama pig into a brain-dead patient, with the lung functioning for 9 days [1][12].
- The research was published on August 25 in the journal Nature Medicine, receiving high praise from international experts as a milestone in the field [2][4].

Group 2: Implications for Organ Transplantation
- The achievement highlights the potential of gene-edited pig organs to be more compatible with human biology, which could alleviate the shortage of lung transplant donors [5][9].
- The global demand for organ transplants is increasing, and xenotransplantation is seen as a promising solution to the donor shortage crisis [9].

Group 3: Technical Details
- The transplanted pig lung underwent six CRISPR gene edits: three genes that trigger immune responses were silenced, and three human protein genes were introduced to reduce immune risks [14].
- Although some immune response and organ damage were observed, there was no immediate strong rejection of the gene-edited pig lung [6][17].

Group 4: Future Directions
- The research team plans to optimize gene editing strategies and anti-rejection treatments to extend the survival and functionality of transplanted organs [13][26].
- Further studies are needed to address challenges related to organ rejection and infection before the procedure can be translated into clinical practice [26].
The Woman from Suzhou Who Holds NVIDIA's Chokepoint
量子位· 2025-08-26 05:46
Core Viewpoint
- The article traces the remarkable rise of InnoScience, a Chinese company that has become a key player in the gallium nitride (GaN) semiconductor market and a supplier to NVIDIA, highlighting its rapid growth and innovative strategies in a traditionally incumbent-dominated industry [1][2][3].

Group 1: Company Background and Growth
- InnoScience started as a small workshop in Zhuhai, China, and has grown into a major player in the semiconductor industry, attracting investments totaling 6 billion yuan over seven years [3][4].
- The company listed on the Hong Kong Stock Exchange in December 2024 and currently carries a market valuation of 72.268 billion HKD [4].
- Revenue increased from 68.22 million yuan in 2021 to 593 million yuan in 2023, making InnoScience the leading company in the global GaN power semiconductor market with a 42.4% share [31][34].

Group 2: Technological Innovations
- The company adopted an Integrated Device Manufacturer (IDM) model, controlling the entire semiconductor production process from design to sales, which is crucial for maintaining pricing power [15][30].
- InnoScience focused on developing 8-inch wafer technology, which spreads per-wafer processing costs over more dies and delivers better performance than the industry-standard 6-inch technology (a rough cost illustration follows this summary) [17][19].
- The company became the first in the world to achieve mass production of 8-inch silicon-based GaN wafers, doing so in under six years, a feat that typically takes over a decade [30].

Group 3: Market Position and Future Prospects
- InnoScience's chips have penetrated over 100 niche markets, with a customer base expanding to 140, including major players like BYD and various tech giants [99][102].
- Demand for GaN semiconductors is surging, particularly in data centers, driven by the increasing power requirements of AI; the GaN market in data centers grew from less than 10 million yuan in 2019 to 70 million yuan in 2023 [64][66].
- The collaboration with NVIDIA is expected to scale up significantly as NVIDIA transitions to an 800V direct-current power architecture by 2027, positioning InnoScience as a critical supplier in the AI era [39][66][67].
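The 8-inch-versus-6-inch advantage comes down to simple geometry: usable wafer area grows with the square of the diameter, so each wafer run yields more dies for a broadly similar per-wafer processing cost. The snippet below is a back-of-the-envelope illustration only; the die size is a made-up assumption, and edge losses and yield differences are ignored, so it is not a claim about InnoScience's actual economics.

```python
import math

# Rough illustration of why larger wafers spread cost better: area scales with
# the square of the diameter, so an 8-inch wafer holds roughly 1.8x the dies
# of a 6-inch wafer (edge losses and yield are ignored in this sketch).
def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> float:
    """Approximate gross dies per wafer from usable area alone."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return wafer_area / die_area_mm2

die_area = 4.0  # hypothetical GaN power die area in mm^2 (illustrative only)
d6 = dies_per_wafer(150, die_area)  # 6-inch wafer ~ 150 mm
d8 = dies_per_wafer(200, die_area)  # 8-inch wafer ~ 200 mm
print(f"6-inch: ~{d6:.0f} dies, 8-inch: ~{d8:.0f} dies, ratio ~{d8 / d6:.2f}x")
```

The ratio works out to (200/150)^2, about 1.78, which is the geometric intuition behind the "more efficient cost distribution" claim above.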
Meta's 10,000-Citation Reinforcement Learning Heavyweight Departs, Using Zuckerberg's Own Words as His Farewell Message; It Stings
量子位· 2025-08-26 04:36
Core Viewpoint
- The departure of Rishabh Agarwal from Meta highlights a potential trend of employee attrition within the company, raising concerns about internal conflicts and employee satisfaction amidst a hiring spree [1][22][24].

Group 1: Rishabh Agarwal's Departure
- Rishabh Agarwal, a prominent figure in reinforcement learning at Meta, is leaving the company after 7.5 years, expressing a desire to explore a completely different path [1][17].
- His contributions include significant work on models like Gemini 1.5 and Gemma 2, and he received the Outstanding Paper Award at NeurIPS in 2021 for his research on statistical instability in deep reinforcement learning [4][14][13].
- Agarwal's next steps remain uncertain, but speculation suggests he may venture into entrepreneurship [17].

Group 2: Employee Turnover at Meta
- Agarwal's exit is part of a broader trend, as another long-term employee with 12 years at Meta also announced their departure, joining a competing firm, Anthropic [18][19].
- Reports indicate that tensions between new and old employees regarding salary disparities have led to dissatisfaction, prompting some researchers to threaten resignation [23][24].
- The current hiring surge at Meta may be exacerbating internal conflicts, contributing to the trend of experienced employees leaving the company [22][24].
Another Cloud over Physics Dissipates: New Evidence for Higgs Boson Decay into Muons, Possibly Pointing Beyond the Standard Model
量子位· 2025-08-26 04:36
Core Viewpoint
- The ATLAS team at CERN has made significant advances in understanding the Higgs boson, providing strong evidence for its decay into muons and improving detection sensitivity for its decay into a Z boson and a photon, potentially revealing physics beyond the Standard Model [1][3][6][7].

Group 1: Higgs Boson Decay Findings
- The ATLAS experiment aims to answer fundamental questions: are the Higgs boson's interactions consistent with the Standard Model, and is it the sole source of mass for all fundamental particles [8]?
- The decay H→μμ (a Higgs boson decaying into a pair of muons) is extremely rare, occurring approximately once in every 5,000 Higgs decays [9].
- Despite its rarity, this decay provides the best opportunity to study the interaction between the Higgs boson and second-generation fermions, which is crucial for understanding the origin of mass for different generations of particles [10].
- Identifying this rare decay is challenging because its signal is easily obscured by the thousands of muon pairs produced through other processes [11].
- The ATLAS experiment used data from different operational phases of the LHC, including Run 2 and Run 3, and developed complex background-modeling methods to classify recorded events and improve signal detection [12].
- By combining Run 2 and Run 3 data, ATLAS has observed evidence for the H→μμ decay with a significance of 3.4 standard deviations, corresponding to less than a 0.3% probability that the excess is a statistical fluctuation (see the quick significance check after this summary) [13][14].

Group 2: H→Zγ Decay Findings
- The decay H→Zγ involves the Higgs boson decaying into a Z boson and a photon, with the Z boson further decaying into electron or muon pairs [17].
- This decay is also rare and proceeds through a virtual-particle "loop"; if new particles contribute to this loop, it could provide clues to physics beyond the Standard Model [18].
- Identifying the H→Zγ decay is challenging because the Z boson decays into detectable leptons only about 6% of the time, significantly reducing observability [18].
- The complex conditions of LHC Run 3, including increased pile-up collisions, further complicate the identification of H→Zγ signals [18].
- By combining Run 2 and Run 3 data and employing advanced modeling and event-classification techniques, ATLAS reported an excess for the H→Zγ decay with a significance of 2.5 standard deviations, providing the most stringent expected sensitivity to date for measuring this decay's branching ratio [19].

Group 3: Background Knowledge on the Higgs Boson
- The Higgs boson, popularly known as the "God particle", was proposed by Nobel laureate Peter Higgs; it is a spin-zero boson that is electrically and color neutral, highly unstable, and decays almost immediately after being produced [24][25].
- The nickname "God particle" originated from a 1993 book by physicist Leon Lederman, who initially intended to use a more vulgar term but opted for a more marketable name [27].
- The Higgs boson is a manifestation of the Higgs field, which is hypothesized to permeate the universe, allowing certain fundamental particles to acquire mass through their interaction with this field [34][35].
- The Standard Model describes the fundamental forces and particles, including fermions and bosons, and explains how particles acquire mass through the Higgs mechanism [37][39].
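For readers less familiar with particle-physics conventions, a significance quoted in standard deviations maps onto a fluctuation probability through the standard normal distribution. The snippet below is a quick illustrative check using SciPy's survival function, not a calculation taken from the ATLAS analysis itself.

```python
from scipy.stats import norm

# One-sided probability that a pure background fluctuation reaches a given
# significance, P(Z > sigma) for a standard normal variable Z.
for sigma in (2.5, 3.4, 5.0):
    p = norm.sf(sigma)
    print(f"{sigma} sigma -> p ~ {p:.1e} ({p * 100:.3f}%)")

# 3.4 sigma corresponds to roughly 0.03%, comfortably below the 0.3% quoted
# above as an upper bound; 5 sigma (~3e-7) is the conventional threshold for
# claiming a discovery, which is why 3.4 sigma counts as "evidence" only.
```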
Injecting CLIP Semantics into Visual Tokens: Toward a New Paradigm for Multimodal Understanding and Generation
量子位· 2025-08-26 04:36
Contributed by Tencent ARC Lab; 量子位 | 公众号 QbitAI

Let visual tokens speak: they can both understand images and generate them.

Tencent ARC Lab, together with the Institute of Automation of the Chinese Academy of Sciences, City University of Hong Kong, Zhejiang University, and other institutions, has proposed a new visual tokenizer, TokLIP (Token + CLIP).

TokLIP combines low-level discrete visual tokens with high-level CLIP semantics, efficiently unifying multimodal understanding and generation (a rough sketch of this idea appears at the end of this section). It supports end-to-end autoregressive training and plugs seamlessly into existing LLM frameworks, greatly lowering the compute and data barriers for multimodal models.

It needs only 20% of the training data required by comparable methods, yet achieves SOTA on tasks such as image classification, image-text retrieval, and multimodal understanding. There is good reason to believe TokLIP could become an important building block for the next generation of general-purpose multimodal models.

More details follow below.

TokLIP's architecture and core design

Over the past few years, AI has moved from single-modality to multimodality. Whether the input is images, video, or text, people want machines to be able, like humans, both to "understand" the world and to "describe" what they see.

The key question is how to achieve unified comprehension and generation capabilities within a single model.

Current autoregressive multimodal large models mostly rely on two types of core components for encoding images. ...
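As a rough, unofficial illustration of the core idea above (the actual TokLIP design is not reproduced here), the sketch below shows one way discrete visual-token embeddings could be enriched with a global CLIP image feature before being handed to an autoregressive LLM backbone. All class names, dimensions, and the fusion strategy are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class SemanticVisualTokens(nn.Module):
    """Toy fusion of low-level discrete visual tokens with a CLIP semantic feature."""

    def __init__(self, codebook_size=8192, token_dim=768, clip_dim=512):
        super().__init__()
        self.token_embed = nn.Embedding(codebook_size, token_dim)  # low-level discrete tokens
        self.clip_proj = nn.Linear(clip_dim, token_dim)            # inject CLIP semantics
        self.fuse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=token_dim, nhead=8, batch_first=True),
            num_layers=2,
        )

    def forward(self, token_ids, clip_feature):
        # token_ids: (batch, seq_len) ids from a VQ-style tokenizer
        # clip_feature: (batch, clip_dim) global CLIP image embedding
        x = self.token_embed(token_ids)
        x = x + self.clip_proj(clip_feature).unsqueeze(1)  # broadcast semantics to every token
        return self.fuse(x)  # CLIP-conditioned token features for an LLM backbone

tokens = torch.randint(0, 8192, (2, 256))
clip_feat = torch.randn(2, 512)
out = SemanticVisualTokens()(tokens, clip_feat)
print(out.shape)  # torch.Size([2, 256, 768])
```

Broadcasting a single global CLIP embedding over every token is the simplest possible fusion; a real tokenizer along these lines would likely use richer, spatially aligned semantic features.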
Latest Agent Automatically Operates Phones and Computers, Sweeping Open-Source SOTA on 10 Leaderboards | Tongyi Lab
量子位· 2025-08-25 23:05
Core Viewpoint
- The article covers the launch of the Mobile-Agent-v3 framework by Tongyi Lab, which achieves state-of-the-art (SOTA) performance in automating tasks on mobile and desktop platforms and performs complex tasks through a multi-agent system [2][9].

Group 1: Framework and Capabilities
- The Mobile-Agent-v3 framework can independently execute complex tasks from a single command and seamlessly switch roles within a multi-agent framework [3][9].
- It has achieved SOTA performance across ten major GUI benchmarks, demonstrating both foundational capabilities and reasoning generalization [9][11].

Group 2: Data Production and Model Training
- The framework relies on a robust cloud infrastructure built on Alibaba Cloud, enabling large-scale parallel task execution and data collection [11][13].
- A self-evolving data production chain automates data collection and model optimization, creating a feedback loop for continuous improvement [13][15].
- The model is trained on high-quality trajectory data, generated by combining historical task data with large-scale pre-trained language models [22][23].

Group 3: Task Execution and Understanding
- The framework emphasizes precise interface-element localization, allowing the AI to understand the graphical interface effectively [18][19].
- It incorporates complex task planning, enabling the AI to strategize before executing and to handle long-horizon, cross-application tasks [21][22].
- The model understands the causal relationship between actions and interface changes, which is crucial for effective task execution [24][25].

Group 4: Reinforcement Learning and Performance
- The Mobile-Agent team employs reinforcement learning (RL) to enhance the model's decision-making through real-time interaction [28][29].
- An innovative TRPO algorithm addresses the challenge of sparse and delayed reward signals in GUI tasks, significantly improving learning efficiency [31][36].
- The framework has shown a performance increase of nearly 8 percentage points in dynamic environments, indicating its self-evolution potential [36][40].

Group 5: Multi-Agent Collaboration
- Mobile-Agent-v3 supports multi-agent collaboration, with different agents handling task execution, planning, reflection, and memory (a simplified sketch of such a loop follows this summary) [33][34].
- This collaborative approach creates a closed-loop enhancement pipeline, improving the overall efficiency and effectiveness of task execution [34][35].
- The design enables the AI to act with purpose, adjust based on feedback, and retain critical information for future tasks [35][36].
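The division of labor described above (planning, execution, reflection, memory) can be pictured as a simple closed loop. The sketch below is a toy illustration under that assumption, not the Mobile-Agent-v3 implementation; every function name, the placeholder GUI action, and the stopping rule are invented for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    memory: list = field(default_factory=list)  # notes carried across steps
    done: bool = False

def planner(state: AgentState) -> str:
    # Decide the next sub-goal from the overall goal and accumulated notes.
    return f"next step toward: {state.goal}"

def executor(plan: str) -> str:
    # Placeholder for a grounded GUI action (tap/type/scroll) and its observation.
    return f"executed [{plan}], screen changed"

def reflector(observation: str) -> bool:
    # Judge whether the observation indicates task completion.
    return "changed" in observation  # trivially succeeds in this toy sketch

def run(goal: str, max_steps: int = 3) -> AgentState:
    state = AgentState(goal=goal)
    for _ in range(max_steps):
        plan = planner(state)
        obs = executor(plan)
        state.memory.append(obs)  # notetaker role: persist critical information
        if reflector(obs):
            state.done = True
            break
    return state

print(run("open the settings app and enable dark mode"))
```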
Priced at 25,000 RMB: NVIDIA Unveils the "Most Powerful Brain" for Robots, AI Compute Soars 750% with 128GB of Memory; Unitree Is Already Using It
量子位· 2025-08-25 23:05
Core Insights
- NVIDIA has launched Jetson Thor, a new robotic computing platform that brings server-class computing power to robots, delivering 2070 TFLOPS of AI performance, 7.5 times that of the previous-generation Jetson Orin, along with a 3.5x improvement in energy efficiency (a quick arithmetic check follows this summary) [1][3][4].

Performance and Specifications
- Jetson Thor features a 128GB memory configuration, unprecedented in edge computing devices [2].
- The platform is built on the Blackwell GPU architecture and can run multiple AI models simultaneously on edge devices [6].
- The Jetson AGX Thor developer kit is priced at $3,499 in the U.S. (approximately 25,000 RMB), while the T5000 module is available at $2,999 for bulk purchases [8][9].

Technical Features
- Jetson Thor includes a GPU with 2,560 CUDA cores and 96 fifth-generation Tensor Cores, and a CPU with 14 Arm Neoverse V3AE cores, significantly enhancing real-time control and task management [11][13].
- It pairs 128GB of LPDDR5X memory with 273GB/s of memory bandwidth, crucial for large Transformer inference and high-concurrency video encoding [13].
- The platform can deliver a first-token response time of 200 milliseconds and generate over 25 tokens per second, enabling real-time human-robot interaction [16].

Industry Adoption
- Several Chinese companies, including Unisound Medical and Youbik, are integrating Jetson Thor into their systems, highlighting its impact on robot agility, decision-making speed, and autonomy [19].
- Boston Dynamics is incorporating Jetson Thor into its Atlas humanoid robot, giving it computing power previously available only in servers [20].
- Agility Robotics plans to use Jetson Thor as the core computing unit for its sixth-generation Digit robot, enhancing its logistics capabilities [21].

Software and Development
- Jetson Thor is optimized for major AI frameworks and models, supporting NVIDIA Isaac for simulation and development and Holoscan for sensor workflows [14].
- The platform enables a continuous training-simulation-deployment cycle, allowing robotic capabilities to keep improving after deployment [25].

Future Outlook
- NVIDIA emphasizes that effective physical AI and robotics require a triad of computing systems: a DGX system for training, the Omniverse platform for simulation, and Jetson Thor as the robot's brain [23].
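Two of the quoted figures invite quick arithmetic: the 7.5x claim implies a baseline for Jetson Orin, and the latency and throughput numbers bound how long a short conversational reply would take. The snippet below only rearranges the numbers quoted above; the 100-token reply length is a made-up assumption for illustration.

```python
# Quick arithmetic on the quoted Jetson Thor figures (illustrative, not a spec sheet).
thor_tflops = 2070      # quoted AI performance
speedup = 7.5           # quoted gain over Jetson Orin
orin_baseline = thor_tflops / speedup
print(f"Implied Jetson Orin baseline: ~{orin_baseline:.0f} TFLOPS")  # ~276

first_token_s = 0.200   # quoted time to first token
tokens_per_s = 25       # quoted sustained generation rate
reply_tokens = 100      # hypothetical short spoken reply
total_s = first_token_s + reply_tokens / tokens_per_s
print(f"~{total_s:.1f} s end-to-end for a {reply_tokens}-token reply")  # ~4.2 s
```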
To Stop AI from "Grinding the Question Bank", the Latest Covers of Nature and Other Top Journals Become a Dataset Testing Models' Scientific Reasoning | Shanghai Jiao Tong University
量子位· 2025-08-25 15:47
Core Viewpoint
- The article describes the development of the MAC (Multimodal Academic Cover) benchmark, which evaluates the true capabilities of advanced AI models such as GPT-4o and Gemini 2.5 Pro by testing them on the latest scientific content, addressing the problem of outdated "question banks" in AI assessment [1][5].

Group 1: Benchmark Development
- The MAC benchmark uses the latest covers of 188 top journals, including Nature, Science, and Cell, to build a test set from over 25,000 image-text pairs, ensuring models are evaluated on the most current and complex scientific concepts [3][4].
- The research team designed two testing tasks, selecting the matching text for an image and selecting the matching image for a text, to assess how well models understand the deep connections between visual elements and scientific concepts [17][18].

Group 2: Testing Results
- Even top models such as Step-3 achieved only 79.1% accuracy on the latest scientific content, revealing significant limitations compared with their near-perfect results on other benchmarks [4][19].
- Models such as GPT-5-thinking and Gemini 2.5 Pro, while proficient at visual recognition, still struggle with deep reasoning tasks that require cross-modal scientific understanding [19].

Group 3: Dynamic Benchmarking Mechanism
- The MAC benchmark takes a dynamic approach, continuously updating the dataset and questions to maintain the challenge level as scientific knowledge evolves [24][26].
- A comparison experiment showed that all models performed worse on the latest data (MAC-2025) than on older data (MAC-Old), demonstrating that the natural evolution of scientific knowledge provides an ongoing challenge for AI models [26].

Group 4: DAD Methodology
- The DAD (Divide and Analyze) method structures the reasoning process into two phases, a detailed visual description followed by high-level analysis, simulating how human experts think (a schematic sketch follows this summary) [21][22].
- This two-step approach significantly improved the accuracy of multiple models, showing the effectiveness of extending reasoning time in multimodal scientific understanding tasks [22][23].

Group 5: Future Prospects
- The MAC benchmark is expected to evolve into a more comprehensive evaluation platform, with plans to include more scientific journals and dynamic scientific content such as conference papers and news [28].
- As AI capabilities approach human levels, the MAC benchmark can serve as a "touchstone" for understanding the boundaries of AI capability and the path toward true intelligence [28].
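The two-phase DAD procedure lends itself to a very small pipeline sketch: first force a purely descriptive pass over the cover image, then reason over that description against the candidate cover stories. The code below is an illustrative skeleton only; `ask_model` is a hypothetical placeholder for whatever multimodal model client is used, and the prompts are not the paper's exact wording.

```python
def ask_model(prompt: str, image_path: str | None = None) -> str:
    """Placeholder for a real multimodal model call; replace with an actual API client."""
    raise NotImplementedError("plug in your multimodal model client here")

def dad_answer(image_path: str, candidate_texts: list[str]) -> str:
    # Phase 1: a detailed, low-level visual description of the journal cover,
    # deliberately withholding any guess about the underlying research.
    description = ask_model(
        "Describe every visual element of this journal cover in detail, "
        "without guessing what research it refers to.",
        image_path=image_path,
    )
    # Phase 2: high-level analysis over the description only, mimicking how a
    # human expert reasons from observations to the matching cover story.
    options = "\n".join(f"{i}. {t}" for i, t in enumerate(candidate_texts))
    return ask_model(
        f"Cover description:\n{description}\n\n"
        f"Which cover story does it illustrate?\n{options}\n"
        "Answer with the option number and a one-sentence justification."
    )
```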
Karpathy's Latest Vibe Coding Guide: A Three-Layer AI Coding Structure, with Cursor in Good Times, Claude in Hard Times, and GPT-5 Pro in Desperate Times
量子位· 2025-08-25 15:47
Core Viewpoint
- The article covers the new Vibe Coding guide released by Karpathy, which introduces a three-layer structure for AI programming that leverages multiple models to improve coding efficiency and effectiveness [1][3].

Group 1: Three-Layer Structure
- The three layers are Cursor for auto-completion and minor code modifications, Claude Code/Codex for larger functional blocks, and GPT-5 Pro for solving complex bugs and providing in-depth documentation [4][6].
- The structure is based on Karpathy's hands-on programming experience, categorizing tools by usage frequency and task type [5][6].
- The first layer, Cursor, handles about 75% of everyday tasks through auto-completion [9].

Group 2: Tool Utilization and Limitations
- Cursor enables high-bandwidth communication with large language models: embedding specific code snippets or comments conveys task intentions clearly (an illustrative example follows this summary) [11][12].
- Claude Code/Codex is used for implementing larger functions, especially in unfamiliar programming areas, and can quickly generate visualization or debugging code [16].
- However, AI-generated code often lacks elegance and may require manual cleanup due to excessive complexity and poor coding style [17][18].

Group 3: Advanced Problem Solving
- GPT-5 Pro is reserved for the hardest problems, effectively identifying bugs after other models have failed [20].
- Karpathy emphasizes the importance of community feedback and shared experience in refining the Vibe Coding concept, which has evolved since its initial introduction [23][22].

Group 4: Community Insights
- Users report similar workflows: small issues are usually resolved through AI auto-completion, while larger problems require more oversight and direction [27][29].
- The article highlights the need to provide detailed requirements and acceptance criteria to keep AI-generated code consistent [31].
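To make the "comment as intent" pattern concrete, here is a hypothetical example, not taken from Karpathy's guide, of what a completion model sees in an editor like Cursor: a precise comment plus a type signature, with the body being the part tab-completion would typically fill in.

```python
import string
from collections import Counter

# Return the n most common words in `text`, lowercased, ignoring surrounding
# punctuation, as (word, count) pairs sorted by descending count.
def top_words(text: str, n: int = 10) -> list[tuple[str, int]]:
    words = (w.strip(string.punctuation).lower() for w in text.split())
    return Counter(w for w in words if w).most_common(n)

print(top_words("To be, or not to be: that is the question.", 3))
# e.g. [('to', 2), ('be', 2), ('or', 1)]
```

The detailed comment and signature act as the acceptance criteria mentioned above: the more precisely they pin down the behavior, the more consistent the generated body tends to be.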