A First-of-Its-Kind Mid-Training Paradigm Cracks the Mysteries of RL: Llama Finally Catches Up with Qwen!
机器之心· 2025-06-30 09:49
Paper: https://arxiv.org/abs/2506.20512 Code repository: https://github.com/GAIR-NLP/OctoThinker

Recently, a frontier research paper from the Shanghai Innovation Institute (上海创智学院) and Shanghai Jiao Tong University has drawn wide attention in the AI community. The paper digs into why different base language model families (such as Llama and Qwen) behave so differently under reinforcement learning (RL) training, and proposes an innovative mid-training strategy that remolds Llama into a reasoning foundation model well suited to RL, markedly narrowing its performance gap with the Qwen models, which scale naturally under RL. The work supplies a key scientific foundation and technical path for developing the next generation of reasoning-capable AI systems. After publication, the paper drew broad attention on social media. Wenting Zhao, a Meta AI research scientist soon to join UMass Amherst as an assistant professor, was the first to praise it: "Truly impressed by how an academic lab just figured out a lot of mysteries in mid-training to close the RL gap between …"
Microsoft Unveils a Deep Video Exploration Agent That Tops Multiple Long-Video Understanding Benchmarks
机器之心· 2025-06-30 03:18
Core Viewpoint
- The article discusses the limitations of large language models (LLMs) and large vision-language models (VLMs) in processing information-dense long videos, and introduces a novel agent called Deep Video Discovery (DVD) that significantly improves video understanding through advanced reasoning capabilities [1][3].

Group 1: Deep Video Discovery (DVD) Overview
- DVD segments long videos into shorter clips and treats them as an environment, utilizing LLMs for reasoning and planning to answer questions effectively [3][6].
- The system achieved a remarkable accuracy of 74.2% on the challenging LVBench dataset, surpassing previous models significantly [3][17].
- DVD will be open-sourced in the form of an MCP Server, enhancing accessibility for further research and development [3].

Group 2: System Components
- The system consists of three core components: a multi-granularity video database, a search-centric toolset, and an LLM acting as the agent coordinator [7][10].
- The multi-granularity video database converts long videos into a structured format, extracting information at several levels, from global summaries down to segment-level details [10].
- The agent employs three main tools: Global Browse for high-level context, Clip Search for efficient semantic retrieval, and Frame Inspect for detailed pixel-level information (a sketch of this loop follows after this summary) [11][12][13].

Group 3: Performance Evaluation
- DVD's performance was evaluated across multiple long-video benchmarks, consistently outperforming existing models, including a 13.4% improvement over MR. Video and a 32.9% improvement over VCA [17].
- With auxiliary transcripts, accuracy further increased to 76.0%, demonstrating the system's robustness [17].
- An analysis of different foundation models revealed significant behavioral differences, underscoring the importance of reasoning capabilities for the agent's performance [18].
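For readers who want the mechanics, the following is a minimal Python sketch of the agentic loop described in Group 2, under the assumption that the three tools can be reduced to simple functions over a pre-built clip database. All names (`VideoDatabase`, `global_browse`, `clip_search`, `frame_inspect`, `answer_question`) are hypothetical; the released DVD code and its MCP Server interface may differ.

```python
# Illustrative sketch of the agent loop; all names are hypothetical.

from dataclasses import dataclass


@dataclass
class VideoDatabase:
    """Multi-granularity store built offline from one long video."""
    global_summary: str                # video-level summary
    clip_summaries: list[str]          # one text summary per short clip
    clip_frames: dict[int, list[str]]  # clip id -> paths of extracted frames


def global_browse(db: VideoDatabase) -> str:
    """High-level context: the video-level summary."""
    return db.global_summary


def clip_search(db: VideoDatabase, query: str, k: int = 3) -> list[int]:
    """Semantic retrieval over clip summaries. Toy keyword overlap here;
    a real system would rank by embedding similarity."""
    words = query.lower().split()
    scored = [(sum(w in s.lower() for w in words), i)
              for i, s in enumerate(db.clip_summaries)]
    return [i for score, i in sorted(scored, reverse=True)[:k] if score > 0]


def frame_inspect(db: VideoDatabase, clip_id: int) -> list[str]:
    """Pixel-level detail: frames of one clip, to be shown to a VLM."""
    return db.clip_frames.get(clip_id, [])


def answer_question(db: VideoDatabase, question: str, llm) -> str:
    """LLM-coordinated loop: browse, search, inspect, then answer.
    `llm` is any callable mapping a prompt string to an answer string."""
    context = [f"Video summary: {global_browse(db)}"]
    for clip_id in clip_search(db, question):
        context.append(f"Clip {clip_id}: {db.clip_summaries[clip_id]}")
        context.append(f"Clip {clip_id} frames: {frame_inspect(db, clip_id)}")
    return llm("\n".join(context) + f"\nQuestion: {question}")
```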
Just In: OpenAI Gives the Whole Company a Week Off! After Meta's High-Pay Raids Poached 8 Researchers, OpenAI Is Reeling
机器之心· 2025-06-30 03:18
Reported by the 机器之心 editorial team

Facing Meta's offers of $100 million signing bonuses to poach its people, OpenAI's response is…

As senior researchers are lured away by competitors one after another, OpenAI executives have assured team members that the company will not "stand idly by." According to Wired, last Saturday OpenAI Chief Research Officer Mark Chen sent employees a strongly worded memo, vowing to go head-to-head with Meta in the battle for top research talent. Wired even ran a feature on it under the headline "OpenAI Leadership Responds to Meta Offers: 'Someone Has Broken Into Our Home'."

At the height of the generative AI race, OpenAI abruptly announced a company-wide break, a full week off for everyone. This is not, of course, because GPT-5 is finished or every competitor has been beaten, but because OpenAI has been worn down by the poaching.

In the memo, Mark Chen wrote: "I have a visceral feeling right now, as if someone has broken into our home and stolen something. Please trust that we have not been sitting idly by."

Just days earlier, Meta CEO Mark Zuckerberg had successfully recruited four senior researchers from OpenAI for Meta's "Superintelligence Lab." A few days before that, Meta swept up three researchers from OpenAI's Zurich office in one go. For details, see our two earlier …
Exploiting Locality in Visual Attention: Tsinghua and ByteDance Propose Token Reorder, Achieving Lossless 5x Sparsity and 4-Bit Quantization
机器之心· 2025-06-30 03:18
Core Viewpoint
- The article discusses the challenges and solutions in optimizing attention mechanisms for visual generation models, focusing on the need for efficient algorithms that can handle growing input sequence lengths and the unique data distribution of visual attention patterns [3][11][15].

Group 1: Analysis Framework
- A systematic analysis framework is proposed to identify the key challenges in attention optimization for visual generation tasks, particularly the "diverse and dispersed" attention patterns [3][6].
- The article emphasizes that these diverse attention patterns can be unified into a "local aggregation" block pattern, which simplifies the design of sparse attention mechanisms [3][15].

Group 2: Sparse Attention and Low-Bit Quantization
- Existing sparse attention methods struggle to adapt to diverse attention patterns, making effective sparse masks difficult to design [7][11].
- The article introduces a novel approach of "reorganizing attention patterns" to unify complex attention modes into hardware-friendly block patterns, enhancing the effectiveness of sparse designs [7][19].
- For low-bit quantization, the article analyzes the key sources of quantization loss and proposes to minimize them through better management of the data distribution [8][12].

Group 3: Proposed Solution
- The proposed "Token Reordering" scheme transforms attention maps into a unified block pattern, facilitating both sparsity and quantization (see the sketch after this summary) [14][19].
- The article highlights that each attention head exhibits consistent local aggregation along specific dimensions, allowing tailored token-reordering strategies per head [19][24].

Group 4: Performance and Efficiency
- Experimental results indicate that the proposed PAROAttention method maintains superior algorithmic performance while achieving significant hardware-efficiency gains, outperforming existing sparse attention methods [45][55].
- The method keeps its additional overhead below 1%, showcasing its hardware-friendly nature [57][58].

Group 5: Broader Implications
- The insights gained from analyzing visual attention patterns can inform the design of training methods and parameterization strategies for visual models, potentially leading to more effective foundation models in the field [58].
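To make the "reorder, then block-sparsify" idea concrete, here is a toy NumPy sketch: tokens are permuted so that similar tokens become neighbors, after which attention mass concentrates in contiguous tiles and a block mask can drop the rest. The 1-D random-projection ordering is an illustrative stand-in for PAROAttention's actual per-head reorder rules, and the quantization side of the method is not shown.

```python
# Toy illustration of "reorder tokens, then block-sparsify attention".
# The ordering heuristic below is NOT the paper's algorithm.

import numpy as np


def reorder_indices(x: np.ndarray, seed: int = 0) -> np.ndarray:
    """Sort tokens by a 1-D projection so similar tokens become neighbors;
    attention between similar tokens then falls into diagonal blocks."""
    proj = np.random.default_rng(seed).standard_normal(x.shape[1])
    return np.argsort(x @ proj)


def block_sparse_mask(attn: np.ndarray, block: int, keep_ratio: float) -> np.ndarray:
    """Keep only the highest-mass (block x block) tiles of an attention map."""
    nb = attn.shape[0] // block
    tiles = attn[:nb * block, :nb * block].reshape(nb, block, nb, block)
    tile_mass = tiles.sum(axis=(1, 3))                 # (nb, nb) mass per tile
    k = max(1, int(keep_ratio * nb * nb))              # number of tiles to keep
    thresh = np.sort(tile_mass.ravel())[-k]
    keep = (tile_mass >= thresh).astype(np.float32)
    return np.kron(keep, np.ones((block, block), np.float32)) > 0


# Usage: attention over reordered token features x of shape (n, d).
n, d, block = 64, 32, 8
x = np.random.default_rng(1).standard_normal((n, d)).astype(np.float32)
x = x[reorder_indices(x)]                              # apply the permutation
scores = x @ x.T / np.sqrt(d)                          # toy shared Q/K projection
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
mask = block_sparse_mask(attn, block, keep_ratio=0.2)  # keep ~1/5 of the tiles
sparse_attn = np.where(mask, attn, 0.0)                # ~5x sparsity
```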
Breaking the Long-Video Understanding Bottleneck: HoPE Hybrid Position Embedding Improves VLM Length Generalization
机器之心· 2025-06-29 04:23
A research team from CMU and Xiaohongshu has studied this problem in depth. They propose the first theoretical framework for evaluating multimodal RoPE extension strategies, and identify one reason existing multimodal RoPE variants generalize poorly: retaining all of RoPE's frequencies hurts long-context semantic modeling. Building on this analysis, their Hybrid of Position Embedding (HoPE) substantially improves the length generalization of VLMs, achieving the best performance on tasks such as long-video understanding and retrieval.

Haoran Li (李浩然) is a graduate student in the Machine Learning Department at CMU, working on long-context modeling, alignment, and retrieval-augmented generation for foundation models.

Today's vision-language models (VLMs) have achieved excellent results on multimodal tasks such as visual question answering and image captioning, yet they still underperform on long-context tasks such as long-video understanding and retrieval. Although Rotary Position Embedding (RoPE) is widely used to improve the length generalization of large language models, how to extend RoPE effectively to the multimodal setting remains an open problem. Specifically, a common extension uses different RoPE frequencies to encode the different positional coordinates (x, y, t). However, because each dimension of RoPE carries …
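As background for the frequency argument above, the sketch below implements standard RoPE rotation plus a hybrid variant in which the lowest-frequency components are zeroed out, i.e., left position-free, so they can carry semantics across long ranges. The `keep` split and the single temporal axis are illustrative assumptions, not the exact HoPE recipe.

```python
# Minimal RoPE plus a NoPE-like hybrid; frequency allocation is illustrative.

import numpy as np


def rope_rotate(x: np.ndarray, pos: np.ndarray, freqs: np.ndarray) -> np.ndarray:
    """Rotate feature pairs of x (n, d) by angle pos * freq per pair."""
    angles = pos[:, None] * freqs[None, :]          # (n, d // 2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out


def hybrid_freqs(d: int, base: float = 10000.0, keep: float = 0.75) -> np.ndarray:
    """Standard RoPE frequencies, with the lowest-frequency tail zeroed so
    those dimensions see no rotation at all (position-free)."""
    freqs = base ** (-np.arange(0, d, 2) / d)       # high -> low frequency
    cutoff = int(keep * len(freqs))
    freqs[cutoff:] = 0.0                            # zero freq == no rotation
    return freqs


# Usage: temporal positions t for a sequence of video tokens.
d, n = 64, 16
x = np.random.default_rng(0).standard_normal((n, d))
t = np.arange(n, dtype=np.float64)
x_rot = rope_rotate(x, t, hybrid_freqs(d))
```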
A Roundup of the Important LLM Papers Since the 2017 Transformer
机器之心· 2025-06-29 04:23
Core Insights
- The article discusses Andrej Karpathy's concept of "Software 3.0," in which natural language becomes the new programming interface and AI models execute the specified tasks [1][2].
- It emphasizes the transformative impact of this shift on developers, users, and software design paradigms, indicating that a new computational framework is being constructed [2].

Development of LLMs
- The evolution of large language models (LLMs) has accelerated since the introduction of the Transformer architecture in 2017, leading to significant advancements in the GPT series and in multimodal capabilities [3][5].
- Key foundational papers that established today's AI capabilities are reviewed, highlighting the transition from traditional programming to natural language interaction [5][6].

Foundational Theories
- "Attention Is All You Need" (2017) introduced the Transformer architecture, which relies solely on self-attention mechanisms and revolutionized natural language processing and computer vision (a minimal sketch of this mechanism follows after this summary) [10][11].
- "Language Models are Few-Shot Learners" (2020) demonstrated the capabilities of GPT-3, establishing the "large model + large data" scaling law as a pathway to more general artificial intelligence [13][18].
- "Deep Reinforcement Learning from Human Preferences" (2017) laid the groundwork for reinforcement learning from human feedback (RLHF), crucial for aligning AI outputs with human values [15][18].

Milestone Breakthroughs
- The "GPT-4 Technical Report" (2023) details a large-scale multimodal language model that exhibits human-level performance across various benchmarks, emphasizing the importance of AI safety and alignment [26][27].
- The release of the LLaMA models (2023) demonstrated that smaller models trained on more extensive datasets could outperform larger models, promoting a new approach to model efficiency [27][30].

Emerging Techniques
- The chain-of-thought prompting technique enhances reasoning in LLMs by guiding them to articulate their thought process before arriving at a conclusion [32][33].
- "Direct Preference Optimization" (2023) simplifies the alignment of language models by using human preference data directly, making it a widely adopted method in industry [34][35].

Important Optimizations
- The PagedAttention mechanism improves memory management for LLM serving, significantly increasing throughput and reducing memory usage during inference [51][52].
- The Mistral 7B model shows how smaller models can achieve high performance through architectural innovations, influencing the development of efficient AI applications [55][56].
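Since the roundup starts from "Attention Is All You Need," a minimal NumPy rendering of its core operation, scaled dot-product self-attention softmax(QK^T / sqrt(d_k))V, may be a useful reference. The single-head, unbatched form below is a simplification of the paper's multi-head version.

```python
# Minimal scaled dot-product self-attention from "Attention Is All You Need".

import numpy as np


def self_attention(x: np.ndarray, wq, wk, wv) -> np.ndarray:
    """Single-head self-attention over token features x of shape (n, d)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])         # (n, n) similarity matrix
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                              # weighted sum of values


# Usage with random projections.
rng = np.random.default_rng(0)
n, d = 8, 16
x = rng.standard_normal((n, d))
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(x, wq, wk, wv)                 # (n, d)
```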
Gary Marcus's Bombshell: Building AGI on Pure LLMs Is Completely Hopeless! An MIT, UChicago, and Harvard Paper Goes Viral
机器之心· 2025-06-29 04:23
Core Viewpoint
- The article discusses a groundbreaking paper co-authored by MIT, the University of Chicago, and Harvard, which reveals significant inconsistencies in the reasoning patterns of large language models (LLMs), termed "Potemkin understanding," suggesting that the hope of creating Artificial General Intelligence (AGI) based solely on LLMs is fundamentally flawed [2][4].

Summary by Sections

Introduction
- Gary Marcus, a prominent AI scholar, highlights the paper's findings: even top models such as o3 frequently exhibit reasoning errors, undermining claims about their understanding and reasoning capabilities [2][4].

Key Findings
- The paper argues that success on benchmark tests does not equate to genuine understanding but rather reflects a superficial grasp of concepts, leading to a "Potemkin understanding" in which seemingly correct answers mask a deeper misunderstanding [3][17].
- The research team identifies two methods to quantify the prevalence of the Potemkin phenomenon, revealing that it exists across models, tasks, and domains, indicating a fundamental inconsistency in conceptual representation [17][28].

Experimental Results
- The study analyzed seven popular LLMs across 32 concepts, finding that while models could define concepts correctly 94.2% of the time, their performance declined sharply when applying those concepts in tasks, as evidenced by high Potemkin rates [29][33].
- The Potemkin rate, defined as the proportion of incorrect answers following correct responses on foundational examples (a worked example follows after this summary), was high across all models and tasks, indicating widespread issues in conceptual application [30][31].

Inconsistency Detection
- The research also assessed internal inconsistency by prompting models to generate examples of specific concepts and then asking them to evaluate their own outputs, revealing substantial limitations in self-assessment capabilities [36][39].
- Inconsistency scores ranged from 0.02 to 0.64 across all examined models, suggesting that misunderstandings stem not only from incorrect concept definitions but also from conflicting representations of the same idea [39][40].

Conclusion
- The findings underscore the pervasive nature of the Potemkin-understanding phenomenon in LLMs, challenging the assumption that high performance on traditional benchmarks equates to true understanding and highlighting the need for further research into the implications of these inconsistencies [40].
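The Potemkin rate described above is simple to state in code. The sketch below computes it from per-case (definition-correct, application-correct) outcomes; the example data is fabricated purely to illustrate the arithmetic, not drawn from the paper.

```python
# Toy computation of a Potemkin rate: among cases where the model answers the
# foundational (definition) question correctly, the fraction of follow-up
# application questions it then gets wrong.

def potemkin_rate(results: list[tuple[bool, bool]]) -> float:
    """results: (definition_correct, application_correct) per test case."""
    kept = [app for defn, app in results if defn]  # condition on correct definition
    wrong = sum(1 for app in kept if not app)
    return wrong / len(kept) if kept else 0.0


# Example: the model defines the concept correctly in 5 of 6 cases,
# but applies it correctly in only 2 of those 5.
cases = [(True, True), (True, False), (True, False),
         (True, True), (True, False), (False, False)]
print(potemkin_rate(cases))  # 3/5 = 0.6
```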
Fully Unlocking Modality Collaboration: MokA, a Fine-Tuning Paradigm Tailor-Made for MLLMs
机器之心· 2025-06-29 02:21
Core Viewpoint
- The article discusses the limitations of current multimodal large model (MLLM) fine-tuning methods, which often replicate strategies from unimodal language models without considering the unique characteristics of multimodal learning [2][9][23].

Summary by Sections

Introduction to MLLMs
- Recent advancements in MLLMs have been significant in tasks involving vision-language and audio-language [2].
- Current fine-tuning methods primarily adapt strategies from unimodal language models, such as LoRA, which may not be suitable for multimodal contexts [2][8].

Limitations of Current Fine-Tuning Methods
- Many efficient multimodal fine-tuning methods overlook the essential differences between modalities, leading to inadequate utilization of multimodal information [9][11].
- The article emphasizes that effective multimodal fine-tuning requires both unimodal adaptation and cross-modal adaptation [9][12].

Introduction of the MokA Method
- The research team proposes a new method called MokA (Multimodal low-rank Adaptation), which balances the independent modeling of unimodal information and the modeling of interactions between modalities [3][12][23].
- MokA retains the efficiency of LoRA while redefining the roles of the projection matrices in a multimodal context (a schematic sketch follows after this summary) [14][23].

Key Components of MokA
- MokA includes three critical modules:
  1. **Modality-specific A matrices**: ensure independent modeling of unimodal information [15].
  2. **Cross-modal attention mechanism**: enhances interaction between different modalities during instruction tuning [16].
  3. **Shared B matrix**: facilitates implicit cross-modal alignment by projecting modalities into a shared space [17].

Experimental Results
- MokA was evaluated across three representative multimodal task scenarios: audio-visual-text, visual-text, and speech-text [19].
- The method demonstrated significant performance improvements on various benchmark datasets, showcasing its adaptability and effectiveness [19][23].

Conclusion
- MokA addresses the oversight of modality differences in current fine-tuning paradigms, providing a new direction for multimodal large model fine-tuning [23].
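The following NumPy sketch shows the three-module structure described above: per-modality A matrices, a cross-modal attention step on the rank-r codes, and one shared B matrix. Dimensions, initialization, and the exact interaction pattern (here, text tokens attending to non-text tokens) are illustrative assumptions; consult the paper for the precise formulation.

```python
# Schematic sketch of a MokA-style adapter; details are illustrative.

import numpy as np


class MokALayer:
    def __init__(self, d: int, r: int, modalities: tuple[str, ...], seed: int = 0):
        rng = np.random.default_rng(seed)
        # One down-projection A per modality keeps unimodal information separate.
        self.A = {m: rng.standard_normal((d, r)) * 0.02 for m in modalities}
        # A single shared up-projection B implicitly aligns modalities.
        self.B = np.zeros((r, d))  # zero-init as in LoRA, so the adapter starts as a no-op

    def forward(self, inputs: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
        """inputs: modality name -> token features of shape (n_m, d)."""
        z = {m: x @ self.A[m] for m, x in inputs.items()}   # unimodal rank-r codes
        # Cross-modal attention: text tokens attend to all non-text tokens
        # (an illustrative choice of interaction pattern).
        if "text" in z and len(z) > 1:
            others = np.concatenate([v for m, v in z.items() if m != "text"])
            scores = z["text"] @ others.T / np.sqrt(others.shape[-1])
            w = np.exp(scores - scores.max(axis=-1, keepdims=True))
            w /= w.sum(axis=-1, keepdims=True)
            z["text"] = z["text"] + w @ others
        # The shared B projects every modality back to model width d.
        return {m: v @ self.B for m, v in z.items()}


# Usage: per-modality token features feed one adapter layer.
layer = MokALayer(d=64, r=8, modalities=("text", "vision", "audio"))
out = layer.forward({
    "text": np.random.default_rng(1).standard_normal((4, 64)),
    "vision": np.random.default_rng(2).standard_normal((6, 64)),
    "audio": np.random.default_rng(3).standard_normal((5, 64)),
})
```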
Just In: Meta Poaches Four Chinese Researchers from OpenAI in One Swoop, Again with Deep Pockets
机器之心· 2025-06-29 02:21
Core Insights
- Meta has recently hired four researchers from OpenAI, continuing its trend of recruiting talent from across the AI sector [1][2][3].
- The hires come shortly after the release of Meta's Llama 4 AI model, which reportedly did not meet CEO Mark Zuckerberg's expectations [2][3].
- OpenAI CEO Sam Altman claimed that Meta is offering signing bonuses of up to $100 million, although he noted that OpenAI's top talent has not been poached [3][4].

Group 1: Recruitment Details
- The four researchers hired by Meta were significant contributors to major OpenAI projects, from GPT-4 to lightweight models such as o1-mini and o3-mini [5][8]:
  - Jiahui Yu: led the development of o3, o4-mini, and GPT-4.1 [6].
  - Hongyu Ren: creator of o3-mini and o1-mini, and a core contributor to o1 [6].
  - Shuchao Bi: head of OpenAI's post-training multimodal organization [6].
  - Shengjia Zhao: key contributor to GPT-4 and o1 [6].

Group 2: Impact on OpenAI and Meta
- The departures may leave OpenAI with a short-term talent gap, potentially affecting the development of GPT-5 [8].
- Meta aims to strengthen its capabilities in model fine-tuning and multimodal alignment, previously identified weaknesses in its technology stack [8].
From Post-Training Back to Pre-Training: Can LLM+RL Go Further in Delivering on Its Potential?
机器之心· 2025-06-28 05:22
Core Insights
- The article discusses the potential of combining reinforcement learning (RL) with large language models (LLMs), particularly focusing on the transition from post-training to pre-training phases, highlighting the challenges and opportunities in this area [2][3].

Group 1: Transition from Post-training to Pre-training
- Integrating RL with LLMs is seen as a significant technological advancement, extending its applications from post-training to the pre-training phase [2].
- LLMs traditionally rely on supervised learning, which requires extensive and accurate human-provided data, making RL a viable alternative to address these limitations [3].
- Because RL generates data through model-environment interaction, it reduces dependence on high-quality labeled data and lowers the requirements for supervision [3][4].

Group 2: Applications and Innovations in RL
- Initial applications of RL in LLMs focused on post-training, with techniques such as reinforcement learning from human feedback (RLHF) being the most prominent [4].
- Recent advancements, such as Reinforcement Pre-Training (RPT) by researchers from Microsoft and Tsinghua University, extend RL's application to the pre-training phase, showing improved performance on certain benchmarks [4][5].
- RPT redefines the next-token prediction (NTP) task as a verifiable reasoning task (a sketch of such a reward follows after this summary), potentially unlocking RL's capabilities while reducing reliance on labeled data [5].

Group 3: Challenges and Limitations
- Despite these promising developments, the limits of RL for LLMs are still being mapped out; while the path appears bright, significant challenges remain [4][6].
- RPT's training data and settings have yet to be validated across broader text corpora and foundation models, and the computational resource demands of RL training continue to pose challenges [5].
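To illustrate what "verifiable" means here, the sketch below scores a sampled rollout: the model emits a reasoning trace ending in a next-token guess, and the reward checks that guess against the ground-truth continuation. The response format and exact-match rule are simplifying assumptions, not RPT's actual parsing.

```python
# Minimal sketch of a verifiable next-token-prediction reward in the spirit of
# reinforcement pre-training; parsing and matching rules are assumptions.

def rpt_reward(response: str, ground_truth_token: str) -> float:
    """Reward 1.0 if the model's final prediction matches the true next token."""
    # Assume the response ends with a line like "PREDICTION: <token>".
    for line in reversed(response.strip().splitlines()):
        if line.startswith("PREDICTION:"):
            prediction = line.removeprefix("PREDICTION:").strip()
            return 1.0 if prediction == ground_truth_token else 0.0
    return 0.0  # malformed response earns no reward


# Usage: score sampled rollouts for a policy-gradient update.
rollout = "The sentence so far implies a contrast.\nPREDICTION: however"
print(rpt_reward(rollout, "however"))  # 1.0
```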