Mamba
Even $30 billion might not "recreate GPT-4"? Yang You (NUS) in his latest long essay: exposing the truth about the AI growth bottleneck
量子位· 2025-12-31 03:37
Core Viewpoint
- The article discusses the growing anxiety around the "AI bottleneck" as the third anniversary of ChatGPT approaches, questioning whether current technological paradigms can effectively convert ever more computational power into models significantly stronger than GPT-4 [1][2]

Group 1: Nature of Intelligence and Its Measurement
- Intelligence is fundamentally about energy conversion: over the past decade AI has turned electricity into reusable intelligence, but the efficiency of that conversion is now under scrutiny [6]
- The essence of intelligence is not explanation but prediction, characterized by the ability to forecast future states and bear the consequences of those predictions [7][10]
- Current models derive their intelligence primarily from the pre-training phase, which consumes the most energy and computation, raising the question of whether intelligence will keep growing steadily as computational investment continues [15][20]

Group 2: Computational Paradigms and Their Limitations
- The real bottleneck is not that computational growth has stopped, but that the returns of additional computational power on intelligence growth are diminishing [22][27]
- The essay challenges the mainstream narrative by arguing that pre-training, fine-tuning, and reinforcement learning are all, at bottom, gradient computation plus parameter updates rather than fundamentally distinct methodologies [12][11]
- The success of the Transformer architecture is attributed to its fit with GPU systems, which enabled a stable feedback loop between computational growth, model scaling, and capability gains [16][18]

Group 3: Future Directions and Exploration
- Future AI infrastructure should focus on the overall scalability of parallel computing systems rather than single-chip performance, with emphasis on maintaining or improving the ratio of computation cost to communication cost as clusters grow (a rough numerical sketch follows below) [24][25]
- Multiple exploration directions are proposed, including higher precision, better optimizers, and more scalable architectures or loss functions, all aimed at ensuring that increased computational investment yields proportional gains in intelligence [25][26]
- The article concludes that as long as more efficient ways of organizing computation can be found, the upper limits of intelligence are far from being reached [27]
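To make the compute-versus-communication framing concrete, below is a rough, purely illustrative back-of-the-envelope sketch (not taken from the essay; every number and the ring all-reduce model are assumptions) of how the compute-to-communication ratio of a data-parallel training step can erode as a cluster grows while the work per step stays fixed.

```python
# Illustrative estimate of the compute-to-communication ratio discussed above.
# All numbers below are hypothetical; real systems overlap compute and comms
# and use more elaborate parallelism schemes.

def compute_to_comm_ratio(params: float, tokens_per_step: float,
                          peak_flops_per_gpu: float, interconnect_bytes_per_s: float,
                          num_gpus: int, bytes_per_param: int = 2) -> float:
    """Ratio of time spent computing to time spent all-reducing gradients
    for one data-parallel training step (ring all-reduce approximation)."""
    # ~6 FLOPs per parameter per token for a forward+backward pass.
    flops_per_step = 6 * params * tokens_per_step
    compute_time = flops_per_step / (peak_flops_per_gpu * num_gpus)

    # A ring all-reduce moves ~2 * (N-1)/N * model_bytes per GPU.
    model_bytes = params * bytes_per_param
    comm_time = 2 * (num_gpus - 1) / num_gpus * model_bytes / interconnect_bytes_per_s

    return compute_time / comm_time


if __name__ == "__main__":
    # Toy setup: 70B-parameter model, 4M tokens per step,
    # 1e15 FLOP/s per GPU, 400 GB/s interconnect.
    for n in (256, 1024, 4096):
        r = compute_to_comm_ratio(70e9, 4e6, 1e15, 400e9, n)
        print(f"{n:5d} GPUs -> compute/comm ratio ~ {r:.1f}")
```

Under these toy numbers the ratio shrinks as GPUs are added, which is exactly the kind of degradation the essay argues future systems must counter through better ways of organizing computation.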
AI heavyweight Zhang Xiangyu: the Transformer can't carry the Agent era
Di Yi Cai Jing· 2025-12-18 10:52
The human brain is a master of "infinite-stream" compression; large models cannot learn human-like memory simply by stacking layers, and they become unusable beyond roughly 80,000 tokens.

"But we quickly discovered a huge side effect," Zhang Xiangyu said. The real difficulty is that a model's intelligence degrades rapidly as the text grows longer. "No matter how many tokens today's Transformers claim to support at release, they basically become unusable at around 80,000."

This problem points to a defect in the Transformer: its one-way information flow. No matter how long the input sequence (context) is, information within the model's effective "thinking depth" can only pass from shallow layers to deep layers in one direction; there is no feedback or compression mechanism from deep layers back to shallow ones. This differs fundamentally from the human brain's "infinite-stream" memory mechanism.

"Every sentence I have said today is a function of all the information I have ever seen in my life," Zhang Xiangyu explained by analogy. "Can that function be represented by a network with a fixed number of layers? Definitely not." He said the human brain can dynamically compress and selectively revisit the massive experiences accumulated over a lifetime, whereas the current Transformer architecture cannot meet this "infinite-stream" style of intelligent processing, and this constrains AI's evolution toward general-purpose agents with a high degree of autonomy and the ability to keep learning over the long term.

In fact, researchers have already begun to debate whether the Transformer has fundamental limitations. Just this October, the co-creator of the Transformer architecture, Llion Jon ...
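The fixed-depth argument above can be illustrated with a toy contrast (our own illustrative sketch, not Zhang Xiangyu's code): a Transformer-style stack applies the same fixed number of layers no matter how long the history is, whereas a recurrent compressor applies its state update once per event, so the composed function effectively deepens with the length of the history.

```python
# Toy contrast of the two computation shapes discussed above.
import numpy as np

rng = np.random.default_rng(0)
D = 16                                  # hidden size (arbitrary)
history = rng.normal(size=(1000, D))    # 1000 past "events"

# Fixed-depth stack: always 4 nonlinear maps, however long the history is.
W_layers = [rng.normal(scale=0.1, size=(D, D)) for _ in range(4)]
def fixed_depth_summary(x_seq):
    h = x_seq.mean(axis=0)              # crude pooling over the history
    for W in W_layers:                  # depth does NOT scale with len(x_seq)
        h = np.tanh(W @ h)
    return h

# Recurrent compression: one state update per event, so 1000 events means a
# composition of 1000 nonlinear updates folded into a fixed-size state.
W_state = rng.normal(scale=0.1, size=(D, D))
W_in = rng.normal(scale=0.1, size=(D, D))
def recurrent_summary(x_seq):
    h = np.zeros(D)
    for x in x_seq:                     # effective depth grows with len(x_seq)
        h = np.tanh(W_state @ h + W_in @ x)
    return h

print(fixed_depth_summary(history).shape, recurrent_summary(history).shape)
```

The point of the toy is only the shape of the computation: the recurrent summary is a composition of 1000 nonlinear updates, while the fixed stack always composes exactly four, regardless of how much history it is given.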
Rejection ≠ failure! These high-impact papers were all rejected by top conferences
具身智能之心· 2025-12-12 01:22
Core Insights
- Waymo has released an in-depth blog post detailing its AI strategy centered on its foundation model, emphasizing the use of distillation methods to create high-efficiency models for onboard operation [1][2]
- Jeff Dean highlighted the significance of knowledge distillation, pointing to the creation of the Gemini Flash model as an example of distillation's role in making AI models efficient [1][2]

Historical Context of Rejected Papers
- Many foundational technologies in AI, such as optimizers for large models and computer vision techniques, were initially rejected by top conferences, showing a recurring failure to recognize groundbreaking innovations [6]
- Notable figures in AI, including Geoffrey Hinton and Yann LeCun, have had pioneering work rejected that was later recognized as transformative [6]

Case Studies of Rejected Innovations
- LSTM, a milestone for sequence data processing, was rejected by NIPS in 1996 but later became crucial in speech recognition and machine translation, highlighting the delayed recognition of its value [7][10]
- SIFT, a dominant algorithm in computer vision, was rejected by ICCV and CVPR for its perceived complexity, yet proved vital in real-world image processing [11][13]
- Dropout, a key regularization method for deep neural networks, was initially rejected for being too radical but later became essential for training deep networks effectively [17][19]
- Word2Vec, despite being rejected at ICLR, became a cornerstone of NLP thanks to its efficiency and practicality, eventually receiving recognition for its impact [20][24]
- YOLO transformed object detection by prioritizing speed over precision; it was rejected for its perceived shortcomings but later became a widely adopted framework in industry [28][30]

Reflection on Peer Review Limitations
- The peer review system often struggles to recognize disruptive innovations, leading to a systematic cognitive lag in evaluating groundbreaking research [40][41]
- The tendency to equate mathematical complexity with research contribution can hinder the acceptance of simpler yet effective methods [41]
- Historical examples illustrate that the true measure of a paper's impact is not the initial peer-review outcome but its long-term relevance and problem-solving power [43][47]
Rejection ≠ failure! These high-impact papers were all rejected by top conferences
机器之心· 2025-12-11 02:47
Core Insights
- Waymo has released an in-depth blog post detailing its AI strategy centered on its foundation model, emphasizing the use of distillation methods to create efficient models for onboard operation [1]
- Jeff Dean highlighted the significance of knowledge distillation in AI, reflecting on its initial rejection by NeurIPS 2014, which underestimated its potential impact (a minimal sketch of the distillation objective follows below) [3][4]

Group 1: Historical Context of Rejected Papers
- Many foundational technologies in AI, such as optimizers for large models and computer vision techniques, were initially rejected by top conferences, showing a systemic lag in recognizing groundbreaking innovations [6]
- Notable figures in AI, including Geoffrey Hinton and Yann LeCun, had pioneering work rejected for reasons that seem absurd in hindsight, such as claims of lacking theoretical basis or being overly simplistic [6]

Group 2: Specific Case Studies of Rejected Innovations
- LSTM, a milestone in handling sequential data, was rejected by NIPS in 1996 during a period when statistical methods were favored, only to later dominate fields like speech recognition [8]
- The SIFT algorithm, which ruled computer vision for 15 years, was rejected by ICCV and CVPR for its perceived complexity and lack of elegance, ultimately proving the value of robust engineering design [11]
- Dropout, a key regularization method for deep neural networks, was rejected by NIPS in 2012 for being too radical, yet it became crucial to the success of models like AlexNet [17]
- Word2Vec, despite its revolutionary impact on NLP, received a strong rejection at ICLR 2013 for a perceived lack of scientific rigor, but it quickly became a cornerstone of text representation [19][20]

Group 3: Reflection on Peer Review Limitations
- The peer review system often struggles to recognize disruptive innovations, falling into a "simplicity trap" in which reviewers equate mathematical complexity with research contribution [40]
- Reviewers tend to defend existing paradigms, which can block novel ideas that challenge traditional measures of success [40]
- Demanding rigorous theoretical proof in an experimental field like deep learning can stifle practical breakthroughs, as seen in the initial skepticism toward methods like the Adam optimizer [40]

Group 4: Broader Implications
- The experiences of rejected papers illustrate the nonlinear nature of scientific progress, highlighting that peer review, while essential, is limited by human cognitive biases [41]
- Historical anecdotes, such as the rejection of Einstein's paper on gravitational waves, underline that the true measure of a work's impact is its long-term relevance rather than immediate acceptance [42][44]
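For readers unfamiliar with the technique being discussed, here is a minimal, generic sketch of the soft-label knowledge-distillation objective in the style of Hinton, Vinyals and Dean; it is an illustration only, not Waymo's or Google's actual training code, and the temperature and mixing weight are arbitrary example values.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 4.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend cross-entropy on hard labels with a KL term that pulls the
    student's softened distribution toward the teacher's."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2          # rescale so gradients are comparable to the hard loss
    return alpha * hard + (1 - alpha) * soft

# Example: a large "teacher" guiding a small "student" on a 10-class problem.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

The student trains on a blend of the usual hard-label loss and a KL term toward the teacher's softened outputs, which is how a small onboard or "Flash"-class model can inherit behavior from a much larger one.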
Kimi open-sources a new linear attention architecture that surpasses full-attention models for the first time, with inference speed up 6x
量子位· 2025-10-31 06:27
Core Insights
- The era of the Transformer is being redefined with the introduction of the Kimi Linear architecture, which surpasses traditional full-attention models under the same training conditions [2][10]

Group 1: Kimi Linear Architecture
- Kimi Linear employs a novel attention mechanism that cuts the KV cache requirement by 75% and achieves up to 6x faster inference on long-context tasks [4][26]
- The architecture introduces Kimi Delta Attention (KDA), which allows fine-grained control over memory retention, enabling the model to discard redundant information while preserving what matters [12][10]
- KDA's state update is based on an improved delta rule, keeping training stable even on sequences of millions of tokens and preventing gradient explosion or vanishing (a toy delta-rule sketch follows below) [13][14]

Group 2: Performance and Efficiency
- The model uses a 3:1 mixed layer design, interleaving three linear-attention layers with one full-attention layer to balance global semantic modeling against resource efficiency [15]
- Kimi Linear has shown superior performance across multiple benchmarks such as MMLU and BBH, outperforming comparable Transformers while maintaining accuracy on mathematical reasoning and code generation tasks [22][26]
- The architecture deploys seamlessly on the existing vLLM inference framework, allowing Transformer-based systems to be upgraded to Kimi Linear with little effort [21]

Group 3: Industry Trends
- The dominance of the Transformer is being challenged, with alternatives such as state space models (SSMs) showing potential for efficient computation and long-sequence modeling [28][30]
- Companies like Apple are exploring SSM architectures for their energy efficiency and lower latency, indicating a shift away from exclusive reliance on the Transformer [30]
- The emergence of Kimi Linear signals a move toward more diverse innovation in AI architecture, suggesting a departure from the conventional Transformer path [32]
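To make the delta-rule idea concrete, here is a toy, purely sequential sketch of a delta-rule style linear-attention update with per-channel forgetting, in the spirit of what the article describes for KDA; it is an assumption-laden illustration, not Kimi's actual implementation, which relies on chunked, hardware-efficient kernels.

```python
import numpy as np

def delta_rule_step(S, q, k, v, beta, alpha):
    """One token update of the fixed-size state S (shape [d_v, d_k]).

    alpha: per-channel decay in (0, 1] -- 'fine-grained' forgetting.
    beta:  write strength in (0, 1]    -- how strongly to correct the state.
    """
    S = S * alpha[None, :]                # decay old memory channel by channel
    pred = S @ k                          # what the current state recalls for key k
    S = S + beta * np.outer(v - pred, k)  # delta rule: write only the prediction error
    return S

def delta_rule_attention(Q, K, V, Beta, Alpha):
    d_k, d_v = Q.shape[1], V.shape[1]
    S = np.zeros((d_v, d_k))
    outputs = []
    for q, k, v, beta, alpha in zip(Q, K, V, Beta, Alpha):
        S = delta_rule_step(S, q, k, v, beta, alpha)
        outputs.append(S @ q)             # read with the query; state size is constant
    return np.stack(outputs)

# Tiny usage example with random data.
T, d_k, d_v = 12, 8, 8
rng = np.random.default_rng(0)
O = delta_rule_attention(rng.normal(size=(T, d_k)), rng.normal(size=(T, d_k)),
                         rng.normal(size=(T, d_v)), rng.uniform(0.1, 1.0, T),
                         rng.uniform(0.9, 1.0, (T, d_k)))
print(O.shape)  # (12, 8) -- one output per token, with a constant-size state
```

The key property is that the state S has a fixed size regardless of sequence length (hence the KV-cache savings), and each write stores only the part of v that the state could not already predict from k.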
Apple AI picks Mamba: better than the Transformer on agent tasks
量子位· 2025-10-21 05:41
Core Viewpoint
- The article discusses advances in AI models, focusing on the Mamba model, which shows potential to surpass Transformer models in efficiency and generalization on long tasks and multi-turn agent tasks [1][10]

Group 1: Transformer Limitations
- Transformer models, while capable, carry computational costs that grow quadratically with the length of the input sequence, making them inefficient on long documents [4][5]
- For instance, processing 1,000 words requires handling 1 million word-pair relationships, and for documents with tens of thousands of words the number of pairwise comparisons can reach the billions [5]

Group 2: Mamba Model Advantages
- Mamba, a state space model (SSM), uses a lightweight design that does not rely on a global attention mechanism, instead maintaining a continuously updated internal state that summarizes the input [7][10]
- This yields three significant advantages: computational cost that grows linearly with sequence length, support for streaming processing, and memory usage that stays stable rather than growing with longer sequences (a toy comparison of the two scaling behaviors follows below) [13]

Group 3: Performance Enhancements with Tools
- External tools further boost Mamba's performance on complex tasks: in multi-digit addition, Mamba equipped with a pointer tool reaches near-100% accuracy after training only on 5-digit addition, while Transformers struggle on 20-digit problems [15]
- In code debugging tasks, Mamba's ability to simulate an interactive debugging process yields significantly higher accuracy than Transformers on complex codebases [15]
- Combining Mamba with external tools compensates for its memory limitations, improving efficiency and performance on agent-style tasks [16][18]
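The scaling claim above can be seen in a small toy comparison (our illustration, not Apple's or Mamba's code): full attention materializes an L x L matrix of pairwise scores, while an SSM-style recurrence keeps a fixed-size state that is updated once per token, so compute grows linearly and memory stays constant as the sequence gets longer.

```python
import numpy as np

L, d = 1_000, 64
rng = np.random.default_rng(0)
x = rng.normal(size=(L, d))

# Attention-style: every token is compared to every other token -> L*L pair scores.
scores = x @ x.T                      # shape (1000, 1000): 1,000,000 pairwise entries
print("attention pair entries:", scores.size)

# SSM-style: a diagonal linear recurrence h_t = a*h_{t-1} + b*x_t, read y_t = c*h_t.
a, b, c = 0.95, 0.1, 1.0              # toy, time-invariant parameters
h = np.zeros(d)
ys = np.empty_like(x)
for t in range(L):                    # one constant-cost update per token (streamable)
    h = a * h + b * x[t]
    ys[t] = c * h
print("SSM state size held in memory:", h.size)   # 64, regardless of L
```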
Flash Attention author's latest podcast: Nvidia's GPU dominance will end within three years
量子位· 2025-09-29 04:57
Group 1
- The core argument is that Nvidia's dominance of the GPU market will face increasing competition within the next 2-3 years as specialized chips for different workloads emerge, leading to a more diversified ecosystem [6][9][23]
- Tri Dao emphasizes that AI model architecture, particularly the Transformer, is stabilizing, but chip design and workload adaptation are still changing and still hard [11][12][21]
- Future AI workloads will fall into three main types: traditional chatbots, ultra-low-latency scenarios, and large-scale batch processing, each requiring tailored optimizations from hardware vendors [24][96]

Group 2
- The cost of inference has dropped by roughly 100x since the launch of ChatGPT, driven by improvements in model efficiency and inference optimization techniques [73][75][90]
- Techniques such as model quantization and co-design of model architecture and hardware have contributed significantly to this cost reduction (a minimal quantization sketch follows below) [82][84][88]
- An estimated further 10x improvement in inference efficiency remains available, particularly through specialized hardware and model advances [90][93][95]

Group 3
- The AI hardware landscape is expected to diversify as companies like Cerebras, Groq, and SambaNova introduce solutions emphasizing low-latency inference and high throughput for various applications [23][24][96]
- The emergence of specialized AI inference providers will lead to different trade-offs, with some aiming for broad coverage while others aim to excel in specific scenarios [96][97]
- Evolving AI workloads will continue to drive demand for innovative solutions, particularly real-time video generation and agentic applications that must integrate seamlessly with human tools [117][115][120]
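As an illustration of the quantization lever mentioned above, here is a minimal sketch of symmetric per-tensor int8 post-training quantization of a weight matrix; it is a generic example only, since production inference stacks typically use per-channel or per-group schemes, activation quantization, and dedicated kernels rather than this NumPy toy.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float weights to int8 plus a single scale factor (symmetric, per-tensor)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Toy 4096x4096 weight matrix: 4 bytes/weight in fp32 vs 1 byte/weight in int8.
w = np.random.default_rng(0).normal(scale=0.02, size=(4096, 4096)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).mean()
print(f"memory: {w.nbytes/1e6:.0f} MB -> {q.nbytes/1e6:.0f} MB, mean abs error {err:.2e}")
```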
"Tokens are bullshit": Mamba author puts forward a disruptive view, exposing deep flaws in the Transformer
机器之心· 2025-07-09 09:52
Core Viewpoint
- The article discusses the trade-offs between state space models (SSMs) and Transformers, arguing that tokenization is a limitation SSMs can move beyond, leading to better computational efficiency and modeling capability [1][3][61]

Group 1: State Space Models (SSMs)
- The SSM is presented as a modern version of the recurrent neural network (RNN) with key features that allow it to match the language-modeling performance of Transformers [8][10]
- A defining characteristic of the SSM is that its hidden state dimension is larger than the input and output dimensions, allowing more context to be stored [9][10]
- The state update function must be expressive enough to accurately encode and retrieve the necessary information, which selective SSMs achieve through input-dependent (dynamic) transition matrices (a toy selective-SSM sketch follows below) [11][12]
- Mamba, a specific SSM, integrates parallelization and memory-management techniques to improve computational efficiency [13][14]
- SSMs can outperform Transformers on language-modeling tasks when computational resources are matched [53][56]

Group 2: Transformers
- Transformers excel at tasks requiring fine-grained operations on individual tokens, but they suffer from quadratic complexity, which limits their efficiency [82][86]
- The article argues that Transformers carry an inductive bias that shapes their modeling behavior, making them sensitive to the resolution and semantic content of the data [83][85]
- Despite their strengths, Transformers are not the final answer for all modeling tasks, and significant work remains in the field [89]

Group 3: Tokenization
- Tokenization is a critical step in language modeling, but it limits how well models can grasp fine-grained language detail [39][40]
- The article posits that removing tokenization could improve model performance and better fits the essence of deep learning, which aims to minimize manual feature engineering [44][45]
- Without tokenization, models could learn more effective patterns directly from raw data, enhancing their capabilities [46][52]
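Here is a toy sketch of the selective-SSM idea described above (an illustration under our own simplifications, not the actual Mamba code, which fuses this recurrence into a parallel, hardware-aware scan): the per-channel decay and write gate are computed from the current input, and the hidden state is wider than the input and output.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_state = 8, 64                      # hidden state larger than input/output
W_a = rng.normal(scale=0.1, size=(d_state, d_in))
W_b = rng.normal(scale=0.1, size=(d_state, d_in))
W_c = rng.normal(scale=0.1, size=(d_in, d_state))

def selective_ssm(x_seq):
    """Input-dependent (selective) diagonal recurrence over a sequence."""
    h = np.zeros(d_state)
    ys = []
    for x in x_seq:
        a = 1.0 / (1.0 + np.exp(-(W_a @ x)))   # per-channel, input-dependent decay in (0, 1)
        b = W_b @ x                            # input-dependent write into the state
        h = a * h + b                          # dynamics change with the content of x
        ys.append(W_c @ h)                     # read out back to the input width
    return np.stack(ys)

y = selective_ssm(rng.normal(size=(100, d_in)))
print(y.shape)   # (100, 8): outputs are input-width, the 64-dim state stays internal
```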
Mamba's first author previews a new architecture! A long essay arguing that the Transformer ≠ the final answer
量子位· 2025-07-09 04:57
Core Viewpoint
- The article discusses the trade-offs between the two mainstream families of sequence models, state space models (SSMs) and Transformers, highlighting the strengths and weaknesses of each approach [1][3]

Summary by Sections

Introduction to Mamba and SSMs
- Mamba is a typical SSM built on a modern structured SSM suited to deep learning, and it outperforms similarly sized Transformers on language tasks [2]
- The author consolidates insights from previous talks into a comprehensive article and hints at a significant upcoming architectural advance [3][4]

Attention Mechanism and Its Limitations
- The article challenges the common belief that the high computational cost of models like ChatGPT is due solely to the quadratic complexity of the Transformer's attention mechanism [5][6]
- The new architecture is expected to remain compatible with Transformers, suggesting a shift in how the limitations of attention are understood [7][8]

Comparison of SSMs and Transformers
- SSMs are likened to the human brain, summarizing past information into a fixed-size hidden state, which makes them more efficient at processing long sequences [15][16]
- SSMs have advantages on unstructured data and exhibit computational cost that is linear in sequence length, making them suitable for resource-constrained environments [16]

Key Elements of Mamba's Success
- Mamba's effectiveness is attributed to three key factors: state size, state expressivity, and training efficiency [17][20]
- SSMs allow larger hidden states, storing more information than traditional RNNs [18]
- Mamba introduces selective SSMs to improve state expressivity, akin to the gating mechanisms of classic RNNs [19]
- Training efficiency is achieved through careful parameterization and parallel scan algorithms [21]

Limitations of SSMs
- SSMs lack the precise recall and retrieval of past information that is a strength of Transformers [22]

Transformer Model Characteristics
- Transformers function like a database, storing every piece of information in a KV cache and allowing precise memory and token-level operations [23][25]
- They excel at processing well-defined tokenized data but suffer from high computational cost and dependence on high-quality data [26][27]

Tokenization Debate
- The author argues against the necessity of tokenization, holding that it contradicts deep learning's end-to-end principle and complicates multilingual and multimodal applications [28][30]
- Evidence suggests SSMs outperform Transformers on raw data, underscoring Transformers' weakness with non-semantic token streams [32]

Conclusion on SSMs vs. Transformers
- Both SSMs and Transformers have distinct strengths and weaknesses, and a hybrid approach could perform better than either alone [33][35]
- Research indicates that combining SSM and attention layers can enhance model capability, with an optimal ratio of roughly 3:1 to 10:1 SSM layers per attention layer (a toy layer-schedule helper follows below) [37]
- A likely future direction is models that process raw data directly while leveraging the advantages of both architectures [40]
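To show concretely what such a ratio means, here is a tiny, hypothetical helper (not from the article) that lays out a layer-type pattern given the number of SSM layers per full-attention layer.

```python
def hybrid_schedule(n_layers: int, ssm_per_attn: int = 3) -> list[str]:
    """Return a layer-type pattern such as ['ssm', 'ssm', 'ssm', 'attention', ...]."""
    block = ["ssm"] * ssm_per_attn + ["attention"]
    return [block[i % len(block)] for i in range(n_layers)]

print(hybrid_schedule(12, ssm_per_attn=3))   # the 3:1 end of the reported range
print(hybrid_schedule(22, ssm_per_attn=10))  # the 10:1 end of the reported range
```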
A Transformer blind spot: with just 500 post-training steps, recurrent models break the 256k length-generalization limit
机器之心· 2025-07-08 04:09
Core Insights
- The article discusses the advantages of linear recurrent models such as Mamba, and of linear attention mechanisms, in handling long sequences, which is crucial for long-context reasoning tasks [1][2]
- It highlights how much recurrent models have improved over time, to the point that they can now compete with Transformers on many tasks despite earlier limitations [3]
- A key finding, however, is that recurrent models struggle to generalize beyond their training lengths, with performance dropping sharply on longer sequences [4][6]

Group 1
- The article presents a fix for this generalization failure through simple training interventions, allowing recurrent models to generalize to sequences of up to 256k tokens with just 500 additional training steps [7]
- The research argues that recurrent models have untapped potential rather than inherent flaws [7][8]
- The authors propose the "Unexplored States Hypothesis" to explain the failure to length-generalize: during training the model only ever sees a limited subset of the states it could reach [13][14]

Group 2
- The article describes four training interventions that improve length generalization by changing the initial state of the model [19]
- The interventions are Random Noise, Fitted Noise, State Passing, and Truncated Backpropagation Through Time (TBTT), each designed to expose the model to a broader distribution of states (a toy State Passing sketch follows below) [20][19]
- State Passing and TBTT prove effective at enabling length generalization, achieving these results with only 0.02% of the original pre-training budget [23][24]

Group 3
- The interventions are evaluated on a range of long-context tasks, where they consistently improve length generalization [31]
- Specific tasks include the BABILong benchmark, passkey retrieval, and synthetic copying, on all of which the interventions significantly improve model performance [32][35][39]
- The results indicate that models trained with these interventions can exploit relationships between tokens that lie beyond the training context length [36][39]

Group 4
- The article introduces "Effective Remembrance" to measure how strongly earlier tokens influence a model's current prediction, the goal being that models focus on recent context rather than letting distant tokens dominate [44][50]
- It shows that State Passing improves effective remembrance, allowing models to prioritize recent tokens in their predictions [51][52]
- This adjustment matters for text modeling, ensuring that tokens far in the past do not disproportionately sway the model's output [52]
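Here is a toy sketch of the State Passing intervention as described above (our own NumPy illustration; the paper applies it inside real recurrent language models and detaches the carried state from the gradient graph): rather than starting every training sequence from the zero state, each sequence starts from the final state of the previous one, so training visits states that resemble those reached at much longer lengths.

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, d_in = 32, 8
W_h = rng.normal(scale=0.1, size=(d_state, d_state))
W_x = rng.normal(scale=0.1, size=(d_state, d_in))

def run_chunk(h0, chunk):
    """Unroll a toy recurrent cell over one training chunk, returning the final state."""
    h = h0
    for x in chunk:
        h = np.tanh(W_h @ h + W_x @ x)
    return h

chunks = [rng.normal(size=(128, d_in)) for _ in range(4)]   # four training sequences

# Standard training: every chunk starts from the zero state, so the model only
# ever learns transitions out of states reachable within one chunk length.
zero_start_finals = [run_chunk(np.zeros(d_state), c) for c in chunks]

# State Passing: each chunk starts from the previous chunk's final state
# (in practice the carried state is detached from the gradient graph).
h = np.zeros(d_state)
passed_start_inits = []
for c in chunks:
    passed_start_inits.append(h)          # the initial state this chunk trains from
    h = run_chunk(h, c)

# The carried-over starts are non-zero, "reachable" states -- the kind of initial
# condition the Unexplored States Hypothesis says standard training never shows the model.
print("zero-start init norm:", 0.0)
print("state-passing init norms:",
      [round(float(np.linalg.norm(s)), 3) for s in passed_start_inits])
```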