Knowledge Distillation
GLaD: Knowledge Distillation Injects 3D Geometric Priors into VLA Models, Pushing Task Success Rates Past 94%
具身智能之心· 2025-12-12 01:22
Author: Minghao Guo et al. | Editor: 具身智能之心 I. Research Background and Core Motivation - Vision-Language-Action (VLA) models are a key technology in embodied AI, enabling robots to generate control actions directly from visual observations and natural-language instructions. Most existing VLA models rely on 2D vision encoders such as CLIP and SigLIP, which excel at capturing semantic correspondences between images and text but cannot encode 3D spatial information (depth, object pose, spatial relations). This gap leads to misallocated attention in manipulation tasks, as Figure 1 illustrates: in the tasks "move the tablecloth from the corner of the table to its edge" and "pick up the black bowl between the plate and the ramekin and place it on the plate," conventional VLA models attend to irrelevant regions and fail to localize task-relevant objects precisely, degrading manipulation accuracy. To address this, the research team proposes the GLaD framework, whose core idea is to inject 3D geometric priors into a VLA model via knowledge distillation, giving it both semantic understanding and spatial reasoning without requiring additional depth sensors or 3D annotations. ...
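The article does not spell out GLaD's loss formulation, but the general mechanism it names, distilling features from a 3D-aware teacher into a 2D-encoder student, can be sketched as a projected feature-matching loss. This is a minimal illustration, not GLaD's actual implementation; the array shapes, projection matrix, and function names are all illustrative assumptions:

```python
import numpy as np

def feature_distillation_loss(student_feat, teacher_feat, proj):
    """MSE between projected student features and frozen teacher features."""
    aligned = student_feat @ proj      # map student dim -> teacher dim
    diff = aligned - teacher_feat
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
student = rng.normal(size=(4, 256))        # hypothetical 2D-encoder tokens
teacher = rng.normal(size=(4, 512))        # hypothetical 3D-aware teacher tokens
proj = rng.normal(size=(256, 512)) * 0.01  # learnable projection (here random)

loss = feature_distillation_loss(student, teacher, proj)
```

In training, the loss would be minimized with respect to the student encoder and the projection while the teacher stays frozen, which is what lets the 3D prior transfer without extra depth sensors at inference time.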
Rejection ≠ Failure! These High-Impact Papers Were All Rejected by Top Conferences
具身智能之心· 2025-12-12 01:22
Core Insights - Waymo has released a deep blog detailing its AI strategy centered around its foundational model, emphasizing the use of distillation methods to create high-efficiency models for onboard operations [1][2] - Jeff Dean highlighted the significance of knowledge distillation, comparing it to the creation of the Gemini Flash model, which showcases the importance of distillation in AI model efficiency [1][2] Historical Context of Rejected Papers - Many foundational technologies in AI, such as optimizers for large models and computer vision techniques, were initially rejected by top conferences, showcasing a historical pattern of oversight in recognizing groundbreaking innovations [6] - Notable figures in AI, including Geoffrey Hinton and Yann LeCun, have faced rejection for their pioneering work, which was later recognized as transformative [6] Case Studies of Rejected Innovations - LSTM, a milestone for sequence data processing, was rejected by NIPS in 1996 but later became crucial in speech recognition and machine translation, highlighting the delayed recognition of its value [7][10] - SIFT, a dominant algorithm in computer vision, faced rejection from ICCV and CVPR due to its perceived complexity, yet proved to be vital in real-world image processing [11][13] - Dropout, a key regularization method for deep neural networks, was initially rejected for its radical approach but later became essential in training deep networks effectively [17][19] - Word2Vec, despite being rejected at ICLR, became a cornerstone in NLP due to its efficiency and practical application, eventually receiving recognition for its impact [20][24] - YOLO transformed object detection by prioritizing speed over precision, facing rejection for its perceived shortcomings but later becoming a widely adopted framework in the industry [28][30] Reflection on Peer Review Limitations - The peer review system often struggles to recognize disruptive innovations, leading to a systematic cognitive 
lag in evaluating groundbreaking research [40][41] - The tendency to equate mathematical complexity with research contribution can hinder the acceptance of simpler yet effective methods [41] - Historical examples illustrate that the true measure of research impact is determined not by initial peer-review outcomes but by long-term relevance and problem-solving capability [43][47]
Rejection ≠ Failure! These High-Impact Papers Were All Rejected by Top Conferences
机器之心· 2025-12-11 02:47
Core Insights - Waymo has released a deep blog detailing its AI strategy centered around its foundational model, emphasizing the use of distillation methods to create efficient models for onboard operations [1] - Jeff Dean highlighted the significance of knowledge distillation in AI, reflecting on its initial rejection by NeurIPS 2014, which underestimated its potential impact [3][4] Group 1: Historical Context of Rejected Papers - Many foundational technologies in AI, such as optimizers for large models and computer vision techniques, were initially rejected by top conferences, showcasing a systemic lag in recognizing groundbreaking innovations [6] - Notable figures in AI, including Geoffrey Hinton and Yann LeCun, faced rejection for their pioneering work, often due to reasons that seem absurd in hindsight, such as claims of lacking theoretical basis or being overly simplistic [6] Group 2: Specific Case Studies of Rejected Innovations - LSTM, a milestone in handling sequential data, was rejected by NIPS in 1996 during a period when statistical methods were favored, only to later dominate fields like speech recognition [8] - The SIFT algorithm, which ruled the computer vision domain for 15 years, faced rejection from ICCV and CVPR due to its perceived complexity and lack of elegance, ultimately proving the value of robust engineering design [11] - Dropout, a key regularization method for deep neural networks, was rejected by NIPS in 2012 for being too radical, yet it became crucial for the success of models like AlexNet [17] - Word2Vec, despite its revolutionary impact on NLP, received a strong rejection at ICLR 2013 due to perceived lack of scientific rigor, but it quickly became a cornerstone of text representation [19][20] Group 3: Reflection on Peer Review Limitations - The peer review system often struggles to recognize disruptive innovations, leading to a "simplicity trap" where reviewers equate mathematical complexity with research contribution [40] - 
Reviewers tend to maintain existing paradigms, which can hinder the acceptance of novel ideas that challenge traditional metrics of success [40] - The demand for rigorous theoretical proof in an experimental field like deep learning can stifle practical breakthroughs, as seen with the initial skepticism towards methods like the Adam optimizer [40] Group 4: Broader Implications - The experiences of rejected papers illustrate the nonlinear nature of scientific progress, highlighting that peer review, while essential, is limited by human cognitive biases [41] - Historical anecdotes, such as the rejection of Einstein's own paper on gravitational waves, emphasize that the true measure of a result's impact is its long-term relevance rather than immediate acceptance [42][44]
When a Hundred Billion Parameters Meet a 5mm Chip
TMTPost APP· 2025-12-10 03:19
Core Insights - The global tech industry is experiencing a shift from cloud-based AI to edge AI, driven by the limitations of cloud dependency and the need for real-time processing in critical applications [1][4][18] - The current trend emphasizes the development of smaller, more efficient AI models that can operate independently on edge devices, rather than relying on large cloud models [16][18] Group 1: Challenges of Cloud Dependency - Cloud-based AI systems face significant latency issues, which can be detrimental in time-sensitive applications like autonomous driving [2][4] - Privacy concerns arise from the need to transmit sensitive data to cloud servers, making edge computing a more attractive option for users [2][4] Group 2: The Shift to Edge AI - The industry is moving towards a "cloud-edge-end" architecture, where complex tasks are handled by cloud models while real-time tasks are managed by edge devices [7][18] - Edge AI must overcome the "impossible triangle" of high intelligence, low latency, and low power consumption, necessitating innovative solutions [7][8] Group 3: Techniques for Edge AI Implementation - Knowledge distillation is a key technique that allows smaller models to retain the intelligence of larger models by learning essential features and reasoning paths [8][10] - Extreme quantization reduces model size and increases speed by compressing model weights, allowing for efficient processing on edge devices [10][11] - Structural pruning eliminates redundant connections in neural networks, further optimizing performance for edge applications [10][11] Group 4: Hardware Innovations - The "memory wall" issue in traditional architectures leads to inefficiencies, prompting the development of specialized architectures that integrate storage and computation [11][13] - Companies are exploring dedicated chip designs that optimize performance for specific AI tasks, enhancing efficiency in edge computing [13][14] Group 5: Industry Evolution - The focus is 
shifting from general-purpose AI models to specialized models that excel in specific applications, improving reliability and performance [15][16] - The Chinese AI industry is collectively recognizing the importance of practical applications over sheer model size, leading to a more grounded approach to AI development [16][18]
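The knowledge-distillation technique named in Group 3 is conventionally implemented as a temperature-softened KL divergence between teacher and student output distributions, following Hinton et al.'s formulation. The sketch below is a minimal illustration with hypothetical logits, not any particular vendor's edge pipeline; the temperature value is an arbitrary assumption:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in the classic knowledge-distillation loss."""
    p = softmax(teacher_logits, T)   # soft targets carry inter-class structure
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(T * T * np.mean(kl))

teacher_logits = np.array([[8.0, 2.0, 1.0]])   # hypothetical large cloud model
student_logits = np.array([[4.0, 1.5, 1.0]])   # hypothetical small edge model
loss = distillation_loss(student_logits, teacher_logits)
```

A higher temperature flattens the teacher's distribution, exposing the "dark knowledge" in its near-miss classes, which is what lets a small on-device student inherit reasoning behavior it could not learn from hard labels alone.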
Microsoft's BitDistill Compresses LLMs to 1.58 Bits: 10× Memory Savings, 2.65× CPU Inference Speedup
机器之心· 2025-10-20 07:48
Core Insights - The article discusses the challenges of deploying large language models (LLMs) efficiently in downstream applications, particularly on resource-constrained devices like smartphones, due to high memory and computational costs [1][7] - A new approach called BitDistill is introduced, which aims to compress existing pre-trained LLMs into a 1.58-bit BitNet model while minimizing performance loss and training costs [4][19] Group 1: Challenges and Solutions - LLMs face significant deployment challenges as their scale increases, leading to instability in training and performance degradation when quantized to lower bit representations [2][10] - The introduction of extreme low-bit LLMs, such as BitNet, aims to reduce memory usage and accelerate inference, but achieving comparable accuracy to high-precision models requires extensive pre-training [1][4] Group 2: BitDistill Framework - BitDistill consists of three key stages: model refinement, continuous pre-training, and distillation-based fine-tuning [8][12] - The first stage addresses activation variance issues in low-bit models by introducing additional normalization layers to stabilize the optimization process [9][30] - The second stage involves continuous training with a small amount of pre-training data to adapt the model to the 1.58-bit representation before fine-tuning on specific tasks [11][32] - The third stage employs knowledge distillation techniques to align the performance of the quantized model with that of the full-precision teacher model [13][27] Group 3: Experimental Results - BitDistill demonstrates excellent scalability, achieving performance comparable to full-precision baselines while providing significant improvements in inference speed (approximately 2x) and memory usage (nearly 10x reduction) [19][20] - Experiments on text classification and summarization tasks show that the 1.58-bit BitDistill model maintains high accuracy and quality, with results indicating a strong performance 
across various model sizes [16][21] - The method exhibits cross-architecture generality, maintaining stable performance even when using different pre-trained models [22] Group 4: Ablation Studies - Ablation studies indicate that each stage of the BitDistill process is crucial for achieving the desired balance between efficiency and accuracy, with the removal of any stage leading to significant performance drops [25][26] - The combination of logits and attention distillation techniques yields the best results, highlighting the importance of using multiple strategies to mitigate quantization challenges [27][29]
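The 1.58-bit representation corresponds to ternary weights in {-1, 0, +1} (log2(3) ≈ 1.58). Below is a minimal sketch of the absmean ternary quantizer used in the BitNet line of work; it illustrates only the weight-quantization step, not BitDistill's three-stage training pipeline, and the tensor shapes are arbitrary:

```python
import numpy as np

def ternary_quantize(w, eps=1e-8):
    """Absmean ternary quantization: map weights to {-1, 0, +1}
    plus a single per-tensor scale (sketch of the BitNet-style scheme)."""
    gamma = np.abs(w).mean()                       # per-tensor scale
    q = np.clip(np.round(w / (gamma + eps)), -1, 1)
    return q.astype(np.int8), gamma

def dequantize(q, gamma):
    return q.astype(np.float32) * gamma

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(128, 128)).astype(np.float32)
q, gamma = ternary_quantize(w)
w_hat = dequantize(q, gamma)
```

The stages described in the article (extra normalization, continued pre-training, distillation-based fine-tuning) exist precisely because applying such a quantizer post hoc loses accuracy; the distillation stages pull the ternary student back toward the full-precision teacher's behavior.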
Autonomous Driving Foundation Models Should Be Capability-Oriented, Not Merely Method-Focused
自动驾驶之心· 2025-09-16 23:33
Core Insights - The article discusses the transformative impact of foundational models on the autonomous driving perception domain, shifting from task-specific deep learning models to versatile architectures trained on vast and diverse datasets [2][4] - It introduces a new classification framework focusing on four core capabilities essential for robust performance in dynamic driving environments: general knowledge, spatial understanding, multi-sensor robustness, and temporal reasoning [2][5] Group 1: Introduction and Background - Autonomous driving perception is crucial for enabling vehicles to interpret their surroundings in real-time, involving key tasks such as object detection, semantic segmentation, and tracking [3] - Traditional models, designed for specific tasks, exhibit limited scalability and poor generalization, particularly in "long-tail scenarios" where rare but critical events occur [3][4] Group 2: Foundational Models - Foundational models, developed through self-supervised or unsupervised learning strategies, leverage large-scale datasets to learn general representations applicable across various downstream tasks [4][5] - These models demonstrate significant advantages in autonomous driving due to their inherent generalization capabilities, efficient transfer learning, and reduced reliance on labeled datasets [4][5] Group 3: Key Capabilities - The four key dimensions for designing foundational models tailored for autonomous driving perception are: 1. General Knowledge: Ability to adapt to a wide range of driving scenarios, including rare situations [5][6] 2. Spatial Understanding: Deep comprehension of 3D spatial structures and relationships [5][6] 3. Multi-Sensor Robustness: Maintaining high performance under varying environmental conditions and sensor failures [5][6] 4. 
Temporal Reasoning: Capturing temporal dependencies and predicting future states of the environment [6] Group 4: Integration and Challenges - The article outlines three mechanisms for integrating foundational models into autonomous driving technology stacks: feature-level distillation, pseudo-label supervision, and direct integration [37][40] - It highlights the challenges faced in deploying these models, including the need for effective domain adaptation, addressing hallucination risks, and ensuring efficiency in real-time applications [58][61] Group 5: Future Directions - The article emphasizes the importance of advancing research in foundational models to enhance their safety and effectiveness in autonomous driving systems, addressing current limitations and exploring new methodologies [2][5][58]
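Of the three integration mechanisms listed in Group 4, pseudo-label supervision is the simplest to sketch: a foundation model labels unlabeled driving frames, and only its high-confidence predictions are kept as training targets for the downstream perception model. The confidence threshold, class count, and probabilities below are illustrative assumptions, not taken from the article:

```python
import numpy as np

def pseudo_label(teacher_probs, threshold=0.9):
    """Keep only samples where the teacher's confidence exceeds the
    threshold; return their indices and hard labels for student training."""
    conf = teacher_probs.max(axis=1)
    labels = teacher_probs.argmax(axis=1)
    keep = conf >= threshold
    return np.flatnonzero(keep), labels[keep]

# hypothetical teacher outputs over 4 unlabeled frames, 3 classes
teacher_probs = np.array([
    [0.97, 0.02, 0.01],   # confident -> pseudo-labeled as class 0
    [0.50, 0.30, 0.20],   # ambiguous -> discarded
    [0.05, 0.93, 0.02],   # confident -> pseudo-labeled as class 1
    [0.40, 0.40, 0.20],   # ambiguous -> discarded
])
idx, labels = pseudo_label(teacher_probs)
```

The thresholding step is also where the hallucination risk mentioned in Group 4 concentrates: a miscalibrated foundation model can emit confident wrong labels, so deployed systems typically add calibration or cross-sensor consistency checks on top of this filter.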
After a Month of Quiet, openPangu's Performance Jumps 8%: Huawei's 1B Open-Source Model Is Here
机器之心· 2025-09-05 04:31
Core Viewpoint - Huawei's Pangu Embedded-1B model represents a significant advancement in edge AI, enabling powerful AI capabilities on resource-constrained devices, thus paving the way for intelligent upgrades in various industries [1][5]. Group 1: Model Performance and Efficiency - The openPangu Embedded-1B model, with 1 billion parameters, achieves a new state-of-the-art (SOTA) record in performance and efficiency, demonstrating that smaller models can deliver substantial capabilities [2][3]. - The model's overall average score reached 63.90, surpassing similar models and matching larger models like Qwen3-1.7B, showcasing its parameter efficiency [3][4]. - In mathematical reasoning, the model scored 82.76% on the GSM8K benchmark and 81.83% on the MATH dataset, significantly outperforming its peers [3][4]. Group 2: Technical Innovations - The model employs a soft-hardware collaborative design, optimizing its architecture to align with the characteristics of Ascend hardware, ensuring efficient resource utilization [9][10]. - A two-stage curriculum learning approach is utilized to enhance the model's reasoning capabilities, simulating a human-like learning process [15][16]. - The introduction of offline On-Policy knowledge distillation allows for a more flexible and effective training process, improving the model's accuracy and generalization [18][19]. Group 3: Reinforcement Learning and Future Directions - The model incorporates a multi-source reward reinforcement learning mechanism, enhancing its performance through targeted feedback based on task complexity [22][25]. - Future developments aim to integrate fast and slow thinking processes within a single model, allowing for adaptive responses based on problem difficulty, thus improving both speed and accuracy [29][30].
Closed-Loop Collision Rate Plunges 50%! DistillDrive: A New End-to-End Heterogeneous Multi-Modal Distillation Approach
自动驾驶之心· 2025-08-11 23:33
Core Insights - The article discusses the development of DistillDrive, an end-to-end autonomous driving model that significantly reduces collision rates by 50% and improves closed-loop performance by 3 percentage points compared to baseline models [2][7]. Group 1: Model Overview - DistillDrive utilizes a knowledge distillation framework to enhance multi-modal motion feature learning, addressing the limitations of existing models that overly focus on ego-vehicle status [2][6]. - The model incorporates a structured scene representation as a teacher model, leveraging diverse planning instances for multi-objective learning [2][6]. - Reinforcement learning is introduced to optimize the mapping from states to decisions, while generative modeling is used to construct planning-oriented instances [2][6]. Group 2: Experimental Validation - The model was validated on the nuScenes and NAVSIM datasets, demonstrating a 50% reduction in collision rates and a 3-point improvement in performance metrics [7][37]. - The nuScenes dataset consists of 1,000 driving scenes, while the NAVSIM dataset enhances perception capabilities with high-quality annotations and complex scenarios [33][36]. Group 3: Performance Metrics - DistillDrive outperformed existing models, achieving lower collision rates and reduced L2 error compared to SparseDrive, indicating the effectiveness of diversified imitation learning [37][38]. - The teacher model exhibited superior performance, confirming the effectiveness of reinforcement learning in optimizing state space [37][39]. Group 4: Future Directions - Future work aims to integrate world models with language models to further enhance planning performance and employ more effective reinforcement learning methods [54][55].
On-Device Large Models 20250801
2025-08-05 03:18
Summary of Conference Call Records Industry Overview - The discussion primarily revolves around the advancements in **edge AI models** and their comparison with **cloud-based large models**. The focus is on the hardware improvements, particularly in **NPU (Neural Processing Unit)** technology, which enhances the efficiency of edge devices like smartphones and PCs [1][2][3]. Key Points and Arguments 1. **Hardware Advancements**: The improvement in edge AI is significantly driven by advancements in hardware, particularly in chips like Apple's A18 and Qualcomm's Snapdragon 8 Gen 2, which integrate more efficient NPUs alongside traditional CPU and GPU [1][3]. 2. **Model Development**: There is a notable shift towards **multi-modal AI models** that incorporate various functionalities such as programming and mathematical reasoning, indicating a broader application of AI technologies [2][3]. 3. **Performance Metrics**: Current edge AI chips can run models with up to **100 billion parameters**, showcasing their capability to handle complex computations efficiently [3][4]. 4. **Architectural Optimization**: The development of edge models relies heavily on architectural optimizations, such as **Mixture of Experts (MoE)** and **grouped attention mechanisms**, which enhance the model's efficiency and reduce memory consumption [4][5][6]. 5. **Knowledge Density Improvement**: Techniques like **model quantization** are employed to reduce computational load by converting high-precision floating-point numbers into lower-precision formats, allowing for more efficient processing [8][9]. 6. **Dynamic Pruning**: The concept of dynamic pruning is introduced, where parts of the model that do not contribute to performance are removed during training, enhancing flexibility and efficiency [11][12][13]. 7. 
**Competitive Landscape**: The call highlights the competitive dynamics between domestic and international players in the edge AI space, with companies like **Meta**, **Microsoft**, and **Google** leading in model development, while domestic firms are catching up by focusing on specific application scenarios [14][15][16][17]. 8. **Market Positioning**: Major companies are integrating their edge models into various devices, such as smartphones and PCs, to enhance user experience and drive commercial viability [17][18]. 9. **Domestic Developments**: Domestic companies like **Tencent**, **Alibaba**, and **ByteDance** are developing their edge models, with some achieving competitive performance in niche areas, indicating a growing capability in the local market [22][26][27]. Other Important Insights - The call emphasizes the importance of **data privacy** and the need for edge models to address these concerns while maintaining performance [14]. - The discussion also touches on the **commercialization** of AI technologies, with companies exploring various monetization strategies for their edge AI solutions [17][18]. - The potential for edge AI to surpass human performance in specific tasks is noted, particularly in generating content and automating processes [26][27]. This summary encapsulates the key discussions and insights from the conference call, highlighting the advancements and competitive landscape in the edge AI industry.
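The model-quantization technique described in point 5 can be sketched as symmetric per-tensor int8 quantization, which replaces 32-bit floats with 8-bit integers plus a single float scale, roughly a 4× size reduction. The shapes and the max-based scale choice below are illustrative; production toolchains typically use per-channel scales and calibration data:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: one float scale plus
    8-bit integers in place of 32-bit floats (~4x smaller)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize_int8(q, scale)).max()
```

Each weight's reconstruction error is bounded by half the scale, which is why quantization pairs naturally with the distillation and pruning techniques discussed earlier: the small residual errors can be absorbed during fine-tuning.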
World Artificial Intelligence Conference: 25 Lessons from AI Godfather Hinton
36Kr· 2025-07-29 23:58
Core Insights - Geoffrey Hinton, a prominent figure in AI, discussed the evolution of AI from symbolic reasoning to neural networks at the WAIC 2025, emphasizing the importance of understanding language through large language models (LLMs) [1][2][10] Group 1: Evolution of AI Understanding - For over 60 years, there have been two paradigms in AI: the logical heuristic paradigm focusing on symbolic reasoning and the biological paradigm emphasizing neural network learning [1] - Hinton's early model in 1985 aimed to merge these theories by predicting the next word based on features, which laid the groundwork for modern LLMs [2] - The development of LLMs has evolved from Hinton's initial models to more complex structures capable of processing vast amounts of input and creating intricate relationships [2][3] Group 2: Mechanism of Language Understanding - LLMs and human language understanding share similarities, converting language into features and integrating them across neural network layers for semantic comprehension [3] - Hinton uses the analogy of LEGO blocks to describe how words can be combined to form complex semantic structures, highlighting the flexible nature of language [3][4] - Understanding language is compared to deconstructing a protein molecule rather than creating a clear logical expression [3] Group 3: Knowledge Transfer and Collaboration - Knowledge transfer in humans is inefficient, often relying on explanations, while digital intelligence can share vast amounts of information directly [5][6] - Current technology allows for efficient knowledge migration and collaborative learning across different hardware setups, enhancing the capabilities of models like GPT-4 [6][7] - If independent intelligent agents can share weights and gradients, they can effectively exchange learned knowledge, leading to significant advancements [6][7] Group 4: AI's Future and Global Cooperation - Hinton warns of the potential dangers of AI surpassing human intelligence, 
emphasizing the need for control and ethical considerations in AI development [7][10] - The necessity for global cooperation in AI governance is highlighted, with a call for an international organization to ensure AI develops positively [8][9] - Hinton believes that the challenge of ensuring AI remains beneficial to humanity is one of the most critical issues of the era, requiring collective efforts [9][10]