Context Compression
A Real Cheat Code! New MIT Research: Zero Architecture Changes Unlock Ten-Million-Token Contexts for Large Models
量子位· 2026-01-19 03:48
Core Insights
- The article discusses the Recursive Language Model (RLM), a new method developed by MIT CSAIL for processing long texts that addresses context decay in large models [1][5][11]
- RLM lets top models such as GPT-5 and Qwen-3 handle super-long inputs running to millions of tokens without any modification to their architecture [2][23]

Summary by Sections

Context Decay Issue
- Large models suffer from context decay: performance declines as text length grows, and earlier information is gradually forgotten [5][6]
- Current mainstream mitigations include context compression, retrieval-augmented generation (RAG), and architectural optimizations [7][10]

RLM Methodology
- RLM outsources context handling to an interactive Python environment, letting the model programmatically decompose tasks and process the text on demand (a minimal sketch follows this summary) [4][13][15]
- The model launches a Python REPL, stores the long prompt as a string variable, and performs operations such as keyword filtering and logical decomposition [14]

Performance Metrics
- RLM has been shown to handle over 10 million tokens effectively, far exceeding the native context window of models like GPT-5 [16]
- On the complex long-text OOLONG-Pairs task, RLM delivered substantial gains, reaching F1 scores of 58.00% for GPT-5 and 23.11% for Qwen-3 [16]
- On the BrowseComp-Plus multi-document reasoning task, RLM (GPT-5) reached 91.33% accuracy, outperforming other long-text processing methods [16]

Cost Efficiency
- At the 50th percentile, RLM's cost is competitive with other long-text processing solutions, indicating a favorable cost-performance ratio for most routine tasks [19]
- At the 95th percentile, however, RLM's cost can spike, because its dynamic reasoning process increases API-call frequency with task complexity [20][21]
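The REPL mechanic described under "RLM Methodology" is easy to picture in code. Below is a minimal sketch, assuming a hypothetical `llm()` helper that wraps any chat-model API; it is not MIT's actual implementation, and the chunk size, keyword filtering, and recursion scheme are illustrative choices:

```python
# A minimal sketch of the RLM idea, not MIT CSAIL's actual code. `llm()` is
# a hypothetical helper standing in for any chat-model API call.

def llm(prompt: str) -> str:
    """Hypothetical model call; wire this to your provider's client."""
    raise NotImplementedError

def recursive_answer(question: str, context: str, chunk_chars: int = 50_000) -> str:
    # The long prompt lives as an ordinary Python string, so the model's
    # context window never has to hold all of it at once.
    if len(context) <= chunk_chars:
        return llm(f"Context:\n{context}\n\nQuestion: {question}")

    # Programmatic decomposition: cheap keyword filtering first, so only
    # plausibly relevant chunks reach the model at all.
    keywords = [k.strip().lower() for k in
                llm(f"Give 5 comma-separated keywords for: {question}").split(",")]
    chunks = [context[i:i + chunk_chars] for i in range(0, len(context), chunk_chars)]
    relevant = [c for c in chunks if any(k in c.lower() for k in keywords)] or chunks[:1]

    # Recurse: answer over each surviving chunk, then over the digest of
    # those partial answers (each partial easily fits within one chunk).
    partials = [recursive_answer(question, c) for c in relevant]
    return recursive_answer(question, "\n---\n".join(partials))
```

The property this illustrates is that the full context only ever lives in a Python string; each individual model call sees at most one chunk, or a short digest of chunk-level answers.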
In Depth | An OpenAI Product Manager on the AI Collaboration Behind Codex's Explosive Growth: The Real Bottleneck to AGI-Level Productivity Is Human Typing Speed!
Z Potentials· 2026-01-19 03:02
Core Insights
- Codex, OpenAI's powerful coding agent, has grown 20-fold since the release of GPT-5 in August 2025 and now processes trillions of characters weekly [3][19]
- Codex's primary goal is to raise human productivity by completing tasks proactively rather than merely responding to commands [9][17]
- OpenAI's bottom-up organizational structure allows flexibility and rapid experimentation, which has been crucial to Codex's development [12][14]

Group 1: Codex's Development and Growth
- Codex has become a core tool for software engineering teams, functioning as a new team member capable of writing, testing, and deploying code [15][16]
- Usage has grown explosively, up more than 10x since August and now 20x, making Codex the most heavily used code-generation model [19][20]
- Tight integration of the product and research teams has enabled collaborative iteration, more effective experiments, and faster product improvements [19][26]

Group 2: Proactive Collaboration and User Interaction
- Codex aims to function as a proactive collaborator, much like a new intern, participating across the entire software development lifecycle [16][17]
- The focus is on seamless integration into developers' workflows, so Codex can assist without requiring constant user prompts [18][22]
- The feedback loop built on local interactions improves the user experience and encourages gradual adaptation to AI-assisted development [22][23]

Group 3: Future Vision and Market Position
- The vision for Codex extends beyond writing code to capabilities such as scheduling and task management, positioning it as a comprehensive AI assistant [28][29]
- OpenAI is exploring a "chatter-driven development" model in which communication and collaboration, rather than rigid specifications, drive the coding process [38][39]
- The company recognizes that Codex must adapt to diverse user environments and preferences to remain a valuable tool for a wide range of teams [25][33]
10x Compression, 97% Decoding Accuracy! Why DeepSeek's New Open-Source Model Is Winning Attention at Home and Abroad
Xin Lang Cai Jing· 2025-10-21 23:26
Core Insights
- DeepSeek has open-sourced a new model, DeepSeek-OCR, which uses the visual modality for context compression with the aim of reducing the computational cost of large models [1][3][6]

Model Architecture
- DeepSeek-OCR consists of two main components: DeepEncoder, a visual encoder built for high compression ratios and high-resolution document processing, and DeepSeek3B-MoE, a lightweight language decoder [3][4]
- DeepEncoder integrates two established visual architectures: SAM (Segment Anything Model) for local detail and CLIP (Contrastive Language-Image Pre-training) for global knowledge [4][6]

Performance and Capabilities
- The model shows strong "deep parsing" ability, recognizing complex visual elements such as charts and chemical formulas, which broadens its applications in finance, research, and education [6][7]
- Experiments show that when the number of text tokens is within ten times the number of vision tokens (compression ratio <10x), the model reaches 97% OCR accuracy, and it still holds around 60% accuracy at a 20x compression ratio (see the arithmetic sketch after this summary) [6][7][8]

Industry Reception
- The model has drawn wide acclaim from tech media and industry experts, with figures such as Andrej Karpathy praising its innovative approach of feeding pixels, rather than text tokens, into large language models [3][4]
- Elon Musk commented on the long-term potential of AI models taking primarily photon-based inputs, suggesting a shift in how data may be processed in the future [4]

Practical Applications
- DeepSeek-OCR is positioned as a highly practical model for generating large-scale pre-training data: a single A100-40G GPU can produce over 200,000 pages of training data per day [7][8]
- The model's approach can compress a 1000-word article into just 100 vision tokens, showcasing its efficiency in processing and recognizing text [8]
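The quoted compression figures can be sanity-checked with simple arithmetic. The sketch below maps a text-token/vision-token ratio onto the accuracy regimes reported above; it assumes roughly one text token per word, the exact boundary behavior is assumed, and the function names are illustrative:

```python
def compression_ratio(text_tokens: int, vision_tokens: int) -> float:
    # How many text tokens each vision token stands in for.
    return text_tokens / vision_tokens

def reported_accuracy(ratio: float) -> str:
    # Regimes quoted in the article; the boundary handling is an assumption.
    if ratio <= 10:
        return "~97% OCR accuracy (reported for ~10x compression and below)"
    if ratio <= 20:
        return "~60% OCR accuracy (reported at 20x compression)"
    return "unreported regime; accuracy presumably degrades further"

# The article's example: a 1000-word article (~1000 text tokens, assuming
# about one token per word) rendered into 100 vision tokens.
ratio = compression_ratio(text_tokens=1000, vision_tokens=100)
print(f"{ratio:.0f}x compression -> {reported_accuracy(ratio)}")
```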
Multi-Agent Collaboration Is on the Rise; Is RAG Destined to Be Just a Transitional Solution?
机器之心· 2025-07-19 01:31
Group 1: Core Insights
- AI memory systems are evolving from Retrieval-Augmented Generation (RAG) toward multi-level, dynamically evolving state, enabling agents to retain experience and manage memory dynamically [1][2]
- A wave of AI memory projects has emerged, moving from short-term responses to long-term interaction and giving agents "sustained experience" capabilities [2][3]
- MemoryOS introduces a hierarchical storage architecture that splits dialogue memory into short-term, medium-term, and long-term layers, with dynamic migration and updates driven by FIFO and segmented-paging mechanisms (a minimal sketch follows this summary) [2][3]
- MemGPT takes an operating-system approach, treating the fixed-length context as "main memory" and using paging to manage large-document analysis and multi-turn conversation [2][3]
- Commercial offerings such as ChatGPT Memory run on RAG, retrieving user-relevant information through vector indexes to strengthen memory of user preferences and history [2][3]

Group 2: Challenges Facing AI Memory
- AI memory systems face several challenges, including static storage limits, chaotic multi-modal and multi-agent collaboration, conflicts as retrieval scales up, and weak privacy controls [4][5]
- Hierarchical and state-filtering mechanisms are critical, as is the ability to manage enterprise-level multi-tasking and permissions effectively [4][5]
- These challenges test the flexibility of the technical architecture and push memory systems to evolve toward greater intelligence, security, and efficiency [4][5]
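To make the MemoryOS-style layering concrete, here is a minimal sketch of a three-layer store with FIFO migration between layers. All class and method names are illustrative assumptions, not the project's actual API, and a real system would distill rather than copy entries promoted to long-term memory:

```python
# Illustrative three-layer memory in the spirit of the MemoryOS description
# above; class/method names are assumptions, not the project's real API.
from collections import deque

class HierarchicalMemory:
    def __init__(self, short_cap: int = 8, mid_cap: int = 32):
        self.short_cap, self.mid_cap = short_cap, mid_cap
        self.short: deque[str] = deque()  # most recent dialogue turns
        self.mid: deque[str] = deque()    # segments paged out of short-term
        self.long: list[str] = []         # durable, distilled memories

    def add_turn(self, turn: str) -> None:
        self.short.append(turn)
        # FIFO migration: the oldest short-term turn pages into the
        # medium-term layer instead of being discarded.
        while len(self.short) > self.short_cap:
            self.mid.append(self.short.popleft())
        # When medium-term overflows, its oldest segment is promoted to
        # long-term (a real system would summarize it first).
        while len(self.mid) > self.mid_cap:
            self.long.append(self.mid.popleft())

    def context(self) -> str:
        # Prompt-ready view: durable facts first, then recency-ordered history.
        return "\n".join([*self.long, *self.mid, *self.short])
```

The design choice worth noting is that eviction here is migration rather than deletion: nothing leaves the short-term layer without landing in a lower-frequency, higher-capacity one.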