
Breaking | Chain-of-Thought pioneer Jason Wei reportedly joining Meta; 机器之心 exclusively confirms: Slack accounts already gone
机器之心· 2025-07-16 02:22
Core Viewpoint
- Meta continues to recruit top talent from OpenAI, with notable researchers Jason Wei and Hyung Won Chung reportedly leaving OpenAI to join Meta [1][2][4]

Group 1: Talent Acquisition
- Jason Wei and Hyung Won Chung, both prominent researchers at OpenAI, are confirmed to be leaving for Meta, with their Slack accounts already deactivated [2][4]
- Jason Wei is recognized as a key author of the Chain-of-Thought (CoT) prompting concept, which has significantly influenced the large-model field [4][6]
- Hyung Won Chung has been a core contributor to OpenAI's projects, including the o1 model, and has a strong background in large language models [4][29]

Group 2: Contributions and Impact
- Jason Wei led early work on instruction tuning and contributed to research on the emergent abilities of large models, accumulating over 77,000 citations on Google Scholar [21][16]
- Hyung Won Chung played a critical role in major projects such as PaLM and BLOOM during his time at Google, and later contributed to the o1 series models at OpenAI [26][40]
- Both researchers have been influential in advancing the capabilities of AI systems, particularly in reasoning and information retrieval [38][40]

Group 3: Community Reaction
- Following news of the potential move to Meta, the online community has responded with excitement and congratulations to Jason Wei, reflecting strong interest in the pair's career transition [10][9]
Three top AI technologists share a rare stage to discuss the industry's biggest "Rashomon"
36Ke· 2025-05-28 11:59
Core Insights
- The AI industry is in a significant debate over whether pre-training has reached its limits, with figures such as former OpenAI chief scientist Ilya Sutskever suggesting that pre-training as we know it is nearing its end [1][2]
- The field is shifting from a consensus-driven approach toward non-consensus methods, as companies and researchers seek innovative solutions in AI [6][7]

Group 1: Industry Trends
- The AI landscape is transitioning from a focus on pre-training to alternative methodologies, with companies like Sand.AI and NLP LAB leading the application of multi-modal architectures to language and video models [3][4]
- New models such as Dream 7B demonstrate the potential of applying diffusion models to language tasks, outperforming larger models like DeepSeek V3 [3][4]
- The consensus around pre-training is being challenged; some experts argue it is not yet over, since untapped data remains that could further improve model performance [38][39]

Group 2: Company Perspectives
- Alibaba's Qwen team, led by Lin Junyang, has faced criticism for being conservative, yet it argues that its extensive experimentation has yielded valuable insights and ultimately reaffirmed the effectiveness of the Transformer architecture [5][15]
- Exploration of Mixture-of-Experts (MoE) models is ongoing, with the team recognizing their potential for scalability while addressing the challenges of training stability [16][20]
- The industry is increasingly focused on optimizing model efficiency and effectiveness, with particular interest in balancing model size against performance [19][22]

Group 3: Technical Innovations
- Integrating different model architectures, such as using diffusion models for language generation, reflects a broader trend of innovation in AI [3][4]
- Training models on long sequences and finding effective optimization strategies are critical areas of focus for researchers [21][22]
- Future breakthroughs may come from leveraging increased computational power to revisit previously unviable techniques, suggesting a cycle of innovation driven by advances in hardware [40][41]
Can large models that claim to know everything rescue clumsy robots?
Hu Xiu· 2025-05-06 00:48
From Shanghai to New York, robots can be seen cooking in restaurants around the world. They make burgers, dosas, pizza, and stir-fries. They work on the same principle robots have used to manufacture other products for the past 50 years: executing instructions precisely, repeating the same steps over and over.

But Ishika Singh does not want this "assembly-line" kind of robot; she wants one that can genuinely "make dinner." It should be able to walk into a kitchen, rummage through the fridge and cabinets, take out ingredients and combine them, cook a delicious meal, and then set the table. For a child this might be simple, but no robot can do it. It requires too much knowledge about kitchens, and beyond that common sense, flexibility, and adaptability, all of which lie outside the scope of traditional robot programming.

Singh, a computer science PhD student at the University of Southern California, points out that the crux of the problem is the classical planning pipeline roboticists use. "They need to define every single action, along with its preconditions and expected effects," she explains. "That requires specifying in advance everything that could possibly happen in the environment." Yet even after countless rounds of trial and error and thousands of lines of code, such robots still cannot cope with contingencies their programs never anticipated.

A dinner-serving robot, when formulating a "policy" (the action plan for carrying out instructions), must know not only the local food culture (what "spicy" actually means there), but also the specific kitchen environment (is the rice cooker on a high shelf?) and the special circumstances of the people it serves (Hec ...
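The classical planning pipeline Singh describes (every action specified with preconditions and effects, then searched over to form a policy) can be made concrete with a minimal STRIPS-style sketch. The kitchen predicates and action names below are illustrative assumptions, not from any real robot system:

```python
from collections import deque

# Minimal STRIPS-style planner: each action lists the facts it requires
# (preconditions) and the facts it adds or deletes (effects).
# All predicates and actions here are illustrative, not a real robot stack.
ACTIONS = {
    "open_fridge": {"pre": set(),           "add": {"fridge_open"}, "del": set()},
    "take_rice":   {"pre": {"fridge_open"}, "add": {"have_rice"},   "del": set()},
    "cook_rice":   {"pre": {"have_rice"},   "add": {"rice_cooked"}, "del": {"have_rice"}},
    "set_table":   {"pre": {"rice_cooked"}, "add": {"table_set"},   "del": set()},
}

def plan(start, goal):
    """Breadth-first search over world states; returns a shortest action list."""
    start = frozenset(start)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if goal <= state:                    # all goal facts hold
            return path
        for name, act in ACTIONS.items():
            if act["pre"] <= state:          # preconditions satisfied
                nxt = frozenset((state - act["del"]) | act["add"])
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
    return None                              # goal unreachable

print(plan(set(), {"table_set"}))
# → ['open_fridge', 'take_rice', 'cook_rice', 'set_table']
print(plan(set(), {"serve_wine"}))  # → None: nothing outside the hand-written
                                    # predicates is reachable
```

Every contingency (a rice cooker on a high shelf, a missing ingredient) must be hand-encoded as another predicate and action, which is exactly the brittleness Singh describes.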
Surpassing OpenAI at only 7B parameters! Xiaomi releases MiMo, its first open-source reasoning large model [with an analysis of large-model industry trends]
Qian Zhan Wang· 2025-05-05 08:50
(Image source: 摄图网)

The open-source wave that Chinese tech companies have set off in the large-model field is reshaping the global AI innovation landscape through technological breakthroughs.

According to the "小米大模型" (Xiaomi LLM) official WeChat account, Xiaomi has open-sourced its first large model built specifically for reasoning, XiaomiMiMo, which links pre-training through post-training to comprehensively improve reasoning ability; the full MiMo-7B model series is now open-sourced.

On public benchmarks for mathematical reasoning (AIME 24-25) and code competition (LiveCodeBench v5), MiMo surpasses OpenAI's closed-source reasoning model o1-mini and Alibaba's larger open-source reasoning model QwQ-32B-Preview with only 7B parameters.

Xiaomi's technical team says MiMo's core breakthrough lies in the joint optimization of the pre-training and post-training stages. In pre-training, the model mines high-quality reasoning corpora and synthesizes roughly 200 billion tokens of specialized data, following a three-stage progressive training strategy for a cumulative 25 trillion training tokens. The post-training stage introduces novel reinforcement-learning techniques, including the in-house "Test Difficulty Driven Reward" algorithm and an "Easy Data Re-Sampling" strategy, which improve the model's stability on complex tasks. The team also developed a "Seamless Rollout" system, making training efficiency ...
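The article names MiMo's "Test Difficulty Driven Reward" but gives no details. One plausible reading, offered purely as an illustrative assumption rather than Xiaomi's actual algorithm, is to weight each passed test case inversely to its historical pass rate, so that solving rarely-passed tests earns proportionally more reward:

```python
def difficulty_weighted_reward(passed, pass_rates):
    """Hypothetical difficulty-weighted reward (NOT MiMo's published algorithm).

    passed[i]     -- whether this rollout's code passed test case i
    pass_rates[i] -- fraction of past rollouts that passed test i, in (0, 1]
    """
    # Rarely-passed (hard) tests get large weights, common ones small weights.
    weights = [1.0 / r for r in pass_rates]
    earned = sum(w for w, ok in zip(weights, passed) if ok)
    return earned / sum(weights)   # normalized to [0, 1]

rates = [0.9, 0.1]                 # one easy test, one hard test
easy_only = difficulty_weighted_reward([True, False], rates)
hard_only = difficulty_weighted_reward([False, True], rates)
print(easy_only, hard_only)        # the hard test dominates the reward signal
```

The intended effect under this assumption is a denser, better-shaped reward than a flat pass/fail signal, which could plausibly help stabilize RL on complex tasks as the article claims.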