Compression Is Intelligence
Why Does Li Xiang Say He Believes L4 Will Be Achieved by 2027?
理想TOP2· 2025-08-30 08:58
Core Viewpoint
- The article discusses Li Xiang's belief in achieving Level 4 (L4) autonomous driving by 2027, based on three main points: the clear direction of enhancing AI capabilities, the perspective of pessimistic optimists like Li Xiang and Elon Musk, and the importance of presenting a vision to the capital market [2].

Group 1: AI Development and Autonomous Driving
- The main trajectory of AI development since 2012 is "compression is intelligence," which emphasizes the ability to encode and predict vast amounts of seemingly chaotic data with shorter model descriptions [3].
- The three main lines to achieve this trajectory are foundation models, scaling laws, and emergent abilities [3].
- The concept of "compression is intelligence" indicates that a model's ability to predict future content reflects its understanding of the underlying structure, patterns, and causal relationships in the data [3].
- Current large language models (LLMs) have strong capabilities in understanding complex semantics, which can assist in solving the high cognitive demands of autonomous driving [4][5].

Group 2: Technical Aspects of Autonomous Driving
- The scaling laws suggest that model performance improves with increased computational resources, data volume, and model parameters, although this is an empirical observation without mathematical proof [4].
- For the company, computational resources can be acquired through funding, while data volume relies on simulation data for reinforcement learning, necessitating the development of proprietary autonomous driving chips to meet latency requirements [5].
- The direction for enhancing vehicle capabilities is clear, akin to the significant advancements seen from GPT-1 to GPT-3.5 [6].
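The "compression is intelligence" claim above can be made concrete with a toy sketch (an illustration of the general idea, not the article's or any company's method): a model's per-token cross-entropy in bits is exactly the code length an arithmetic coder driven by that model would spend per token, so better prediction literally means tighter compression.

```python
import math

# Toy illustration of "compression is intelligence": the bits needed to
# encode a sequence equal the sum of -log2(p) over the probabilities the
# model assigns to each actual symbol (the cross-entropy code length).

text = "abababababababab"  # 16 symbols with an obvious alternating pattern

def code_length_bits(probs):
    """Total bits to encode the text, given the probability the model
    assigned to each symbol that actually occurred."""
    return sum(-math.log2(p) for p in probs)

# Model A: uniform over {a, b} -> no pattern learned, 1 bit per symbol.
uniform = [0.5] * len(text)
# Model B: has learned the alternation, assigns 0.99 to each correct symbol.
patterned = [0.99] * len(text)

bits_uniform = code_length_bits(uniform)
bits_patterned = code_length_bits(patterned)
print(bits_uniform)    # 16.0 bits
print(bits_patterned)  # ≈ 0.232 bits
```

The model that "understands" the structure compresses the same data roughly 70x more tightly, which is the sense in which prediction quality and compression are the same quantity.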
Group 3: Future Considerations and Innovations
- While achieving L4 by 2027 may not be guaranteed, the specific architecture may evolve, and the company aims to enhance the vehicle's understanding of the physical world rather than merely addressing engineering problems [7].
- The company is capable of quickly assimilating core ideas from rapid developments in the AI sector, as evidenced by its adaptation of concepts from other models [7].
- The article highlights the importance of selective learning in reinforcement learning, where only verified solutions are used as learning signals, ensuring the quality of the training data [8][9].

Group 4: Research and Development Initiatives
- The company collaborates with local scientific committees to fund research initiatives, aiming to engage with academic professionals to acquire the latest research findings [11].
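The "only verified solutions are used as learning signals" idea is essentially rejection sampling against a verifier. A minimal sketch, with a toy arithmetic checker standing in for whatever verifier is actually used (the data format and `verify` function here are assumptions for illustration):

```python
# Sketch of selective learning: sample candidate solutions, keep only those
# a verifier confirms, and use the survivors as training data. This filters
# out wrong answers before they can become learning signals.

def verify(candidate):
    """Toy verifier: a candidate (expression, target) pair is 'correct'
    if the expression evaluates to the target value."""
    expr, target = candidate
    return eval(expr) == target  # stands in for a real domain checker

# Three candidate solutions to the same problem; one is wrong.
candidates = [("2+2", 4), ("2+3", 4), ("1+3", 4)]

# Only verified candidates survive into the training set.
training_set = [c for c in candidates if verify(c)]
print(training_set)  # [('2+2', 4), ('1+3', 4)]
```

The design point is that the verifier, not the model's own confidence, gates what counts as a training signal, which is what guarantees the quality of the resulting data.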
The First Big Name on Zuckerberg's "Superintelligence" Team! A Principal Researcher from Google DeepMind and a Core Figure Behind "Compression Is Intelligence"
量子位· 2025-06-12 01:37
Core Insights
- Meta is aggressively recruiting top talent from competitors like Google and OpenAI to build a new AI team focused on Artificial General Intelligence (AGI) [23][24][28]
- The recruitment strategy includes offering substantial compensation packages, with salaries ranging from $2 million to $9 million [28][31]
- The urgency of this recruitment is driven by the competitive landscape in AI, where even Meta struggles to retain talent [29][31]

Group 1: Recruitment Strategy
- Meta has confirmed the hiring of Jack Rae, a prominent researcher from Google DeepMind, who was responsible for the Gemini model [2][7]
- The company is also bringing in Johan Schalkwyk, the ML head from Sesame AI, as part of its talent acquisition efforts [3]
- Meta's CEO, Mark Zuckerberg, is personally involved in the recruitment process, creating a high-priority team of around 50 members [25][26]

Group 2: Competitive Landscape
- The AI talent market is highly competitive, with Meta facing challenges in retaining its workforce despite offering high salaries [29][31]
- Reports indicate that Meta has made offers to dozens of researchers from OpenAI and Google, highlighting the intense competition for skilled professionals [28]
- The company aims to enhance its Llama model and develop more powerful AI tools to compete with industry leaders [24][23]

Group 3: Research Focus
- Jack Rae's expertise includes advancements in logical reasoning models and the concept of "compression as intelligence," which aligns with Meta's goals for AGI [12][13][17]
- The new team will focus on improving AI capabilities, particularly in voice and personalized AI tools, to achieve a competitive edge [24][23]
- The establishment of this new lab is seen as a significant strategic move for Meta in the AI domain [23][26]
A New Pre-Training Data Selection Scheme Boosts Data Efficiency 10x! The Setup Only Needs a fastText Scorer | From HKUST and vivo
量子位· 2025-05-15 04:26
Contributed by the PreSelect team to QbitAI (WeChat official account: QbitAI)

The data selection method behind vivo's in-house large model has been made public.

The Hong Kong University of Science and Technology and vivo AI Lab jointly propose PreSelect, which has been accepted to ICML 2025.

It is a lightweight and efficient data selection method: training and deploying a single fastText-based scorer is enough to cut compute requirements by 10x.

The method introduces the concept of a sample's Predictive Strength, together with a formula for computing it. It uses the ordering of a sample's loss across different models to characterize the sample's contribution to a specific capability; effective samples for a given capability are collected to train a fastText classifier, which then filters the full training corpus.

△ Paper title: Predictive Data Selection: The Data That Predicts Is the Data That Teaches

PreSelect: more objective, more lightweight

Existing data selection methods fall into two broad categories: rule-based filtering and model-based filtering.

Rule-based filtering relies on hand-crafted prior rules, such as the C4 pipeline, Gopher rules, and the filtering pipelines of RefinedWeb and FineWeb. While simple to implement, such methods are constrained by human experience and suffer from generalization ...
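As one way to read the "loss ordering across models" idea, here is a hypothetical sketch of a predictive-strength score (the function name and the pairwise agreement rule are assumptions for illustration, not the paper's exact formula): a sample scores high when models ranked stronger on a benchmark consistently achieve lower loss on that sample.

```python
# Hypothetical predictive-strength sketch: for every pair of models, check
# whether the stronger model (higher benchmark score) also has the lower
# loss on this sample. The score is the fraction of pairs that agree.

def predictive_strength(sample_losses, benchmark_scores):
    """Fraction of model pairs where capability and loss orderings agree."""
    n = len(sample_losses)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    agree = sum(
        1 for i, j in pairs
        # Positive product <=> higher score goes with lower loss.
        if (benchmark_scores[i] - benchmark_scores[j])
           * (sample_losses[j] - sample_losses[i]) > 0
    )
    return agree / len(pairs)

# Three models ordered weak -> strong on some target benchmark:
scores = [0.30, 0.55, 0.80]
losses_informative = [3.1, 2.4, 1.7]  # loss drops as capability grows
losses_noise = [2.0, 2.6, 2.3]        # no consistent ordering

print(predictive_strength(losses_informative, scores))  # 1.0
print(predictive_strength(losses_noise, scores))        # ~0.33
```

High-scoring samples under such a rule would then serve as positives for training the lightweight fastText classifier that filters the full corpus, which is what keeps the deployed pipeline cheap.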