Multi-task Learning
Industrial-Grade Self-Evolution for LLMs: BUPT and Tencent AI Lab Propose the MoE-CL Architecture, Tackling a Core Pain Point of Continual Learning in Large Models
机器之心· 2025-09-30 00:27
Core Insights
- The article discusses the urgent need for "self-evolution" in industrial-grade large language models (LLMs), so that they can adapt dynamically to new tasks while retaining existing capabilities [2][6]
- The proposed solution is the MoE-CL framework, which combines task-specific and shared LoRA experts with a GAN-based adversarial module to ensure efficient knowledge transfer and retention [2][6][28]

Group 1: Introduction and Background
- The rapid growth of the digital economy and increasingly diverse text data make cross-domain processing challenging, calling for a solution that handles new tasks efficiently while preserving knowledge from old tasks [5][6]
- Traditional approaches either require extensive resources to train a separate model for each text type or suffer performance imbalances when a single model serves all tasks [5][6]

Group 2: Methodology
- MoE-CL focuses on knowledge accumulation and task adaptation in multi-task learning, using LoRA to augment Transformer blocks while keeping parameter updates small [8][10]
- The framework pairs task-specific LoRA experts with a shared LoRA expert, and a GAN module separates and optimizes task-specific versus shared knowledge (see the sketch after this summary) [8][12][14]

Group 3: Experimental Results
- In A/B tests within Tencent's real business scenarios, MoE-CL reduced manual intervention costs by 15.3% and achieved a high removal rate of 28.8% on task A, demonstrating significant operational efficiency [3][26]
- MoE-CL outperformed existing methods in accuracy and stability across various tasks, showing robust performance in dynamic environments [21][22]

Group 4: Conclusion
- Through its architecture, the MoE-CL framework mitigates catastrophic forgetting while enabling knowledge transfer, supporting continuous learning and adaptation in LLMs [28]
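To make the expert layout concrete, below is a minimal PyTorch sketch of the idea described above, not the authors' implementation: each task gets a dedicated LoRA expert, one shared LoRA expert is trained across tasks, and a small discriminator tries to recover the task id from the shared path. All class names, dimensions, and the discriminator design are illustrative assumptions; the adversarial (GAN-style) update is only indicated in comments.

```python
import torch
import torch.nn as nn


class LoRAExpert(nn.Module):
    """One low-rank adapter: adds (alpha/r) * x A^T B^T to the hidden states."""

    def __init__(self, d_model: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, d_model) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_model, rank))  # zero-init: no change at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        return (x @ self.A.T @ self.B.T) * self.scale


class MoECLBlock(nn.Module):
    """Hypothetical MoE-CL-style adapter block.

    One LoRA expert per task plus one shared LoRA expert. A discriminator
    tries to infer the task id from the shared expert's output; training
    the shared expert to fool it (adversarially) pushes task-specific
    signal into the dedicated experts and keeps the shared path task-invariant.
    """

    def __init__(self, d_model: int, num_tasks: int, rank: int = 8):
        super().__init__()
        self.task_experts = nn.ModuleList(LoRAExpert(d_model, rank) for _ in range(num_tasks))
        self.shared_expert = LoRAExpert(d_model, rank)
        self.discriminator = nn.Sequential(
            nn.Linear(d_model, d_model // 2), nn.ReLU(),
            nn.Linear(d_model // 2, num_tasks),
        )

    def forward(self, hidden: torch.Tensor, task_id: int):
        shared_out = self.shared_expert(hidden)
        task_out = self.task_experts[task_id](hidden)
        # The discriminator sees only mean-pooled shared features; its
        # cross-entropy loss would be minimized for the discriminator and
        # maximized for the shared expert (e.g. via gradient reversal).
        task_logits = self.discriminator(shared_out.mean(dim=1))
        return hidden + shared_out + task_out, task_logits


# Usage sketch
block = MoECLBlock(d_model=64, num_tasks=3)
h = torch.randn(2, 10, 64)
out, logits = block(h, task_id=1)
disc_loss = nn.functional.cross_entropy(logits, torch.full((2,), 1))
```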
FlowDrive: An Interpretable End-to-End Framework with Soft and Hard Constraints (SJTU & Bosch)
自动驾驶之心· 2025-09-22 23:34
Core Insights
- The article introduces FlowDrive, a novel end-to-end driving framework that integrates energy-based flow field representation, adaptive anchor trajectory optimization, and motion-decoupled trajectory generation to enhance safety and interpretability in autonomous driving [4][45]

Group 1: Introduction and Background
- End-to-end autonomous driving has gained attention for its potential to simplify traditional modular pipelines and to leverage large-scale data for jointly learning perception, prediction, and planning [4]
- A mainstream research direction generates Bird's Eye View (BEV) representations from multi-view camera inputs, providing structured spatial views that benefit downstream planning tasks [4][6]

Group 2: FlowDrive Framework
- FlowDrive introduces energy-based flow fields in the BEV space to explicitly model geometric constraints and rule-based semantics, enhancing the effectiveness of BEV representations [7][15]
- A flow-aware anchor trajectory optimization module aligns initial trajectories with safe, goal-oriented areas, improving spatial effectiveness and intention consistency (a simplified sketch follows this summary) [15][22]
- A task-decoupled diffusion planner separates high-level intention prediction from low-level trajectory denoising, allowing targeted supervision and flow-field-conditioned decoding [9][27]

Group 3: Experimental Results
- On the NAVSIM v2 benchmark, FlowDrive achieves state-of-the-art performance with an Extended Predictive Driver Model Score (EPDMS) of 86.3, surpassing previous benchmark methods [3][40]
- FlowDrive shows significant advantages on safety-related metrics such as Drivable Area Compliance (DAC) and Time to Collision (TTC), indicating stronger adherence to driving constraints and better hazard avoidance [40][41]
- Ablation studies validate the design: removing any core component leads to significant declines in overall performance [43][47]

Group 4: Technical Details
- The flow field learning module encodes dense, physically interpretable spatial gradients that provide fine-grained guidance for trajectory planning [20][21]
- The perception module uses a Transformer-based architecture to fuse multi-modal sensor inputs into a compact, semantically rich BEV representation [18][37]
- Training uses a composite loss function that supervises trajectory planning, anchor trajectory optimization, flow field modeling, and auxiliary perception tasks [30][31][32][34]
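The following snippet is a simplified, hypothetical sketch of the flow-aware anchor refinement idea above: it treats the flow field as the negative spatial gradient of a BEV energy map and nudges anchor trajectory points along it with nearest-cell lookup. The real FlowDrive module learns the field and conditions a diffusion planner on it; the energy convention, function name, and step schedule here are assumptions for illustration only.

```python
import torch


def refine_anchors_with_flow(energy: torch.Tensor,
                             anchors: torch.Tensor,
                             step_size: float = 0.5,
                             n_steps: int = 5) -> torch.Tensor:
    """Nudge anchor trajectories along the negative energy gradient.

    energy:  (H, W) BEV energy map -- assumed high near obstacles or
             off-road cells, low in drivable, goal-aligned regions.
    anchors: (N, T, 2) anchor trajectories as (row, col) BEV grid coordinates.
    """
    # Dense flow field = negative spatial gradient of the energy map.
    grad_y, grad_x = torch.gradient(energy)            # each (H, W)
    flow = torch.stack([-grad_y, -grad_x], dim=-1)      # (H, W, 2)

    refined = anchors.clone().float()
    H, W = energy.shape
    for _ in range(n_steps):
        # Nearest-cell lookup of the flow vector at each anchor point.
        rows = refined[..., 0].round().clamp(0, H - 1).long()
        cols = refined[..., 1].round().clamp(0, W - 1).long()
        refined = refined + step_size * flow[rows, cols]  # follow the flow
    return refined


# Usage sketch: 4 anchors of 8 waypoints on a 64x64 BEV grid.
energy_map = torch.rand(64, 64)
anchors = torch.randint(0, 64, (4, 8, 2)).float()
refined = refine_anchors_with_flow(energy_map, anchors)
```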
How Much Parameter Redundancy Is There in LoRA? New Research: Cut 95% and Still Keep High Performance
机器之心· 2025-05-02 04:39
Core Viewpoint
- The article introduces LoRI, which demonstrates that the trainable parameters of LoRA can be reduced dramatically while maintaining strong model performance, achieving results comparable or superior to full fine-tuning and other methods while using only 5% of LoRA's parameters [1][9]

Summary by Sections

LoRA and Its Limitations
- LoRA is widely adopted for parameter-efficient fine-tuning (PEFT) but still incurs significant memory overhead, especially in large models [3][4]
- Recent research indicates substantial redundancy in the incremental parameters, prompting the development of LoRI, which reduces the number of trainable parameters while preserving model knowledge [4]

LoRI Methodology
- LoRI keeps the low-rank matrix A fixed as a random projection and trains matrix B under a task-specific sparse mask, allowing a significant reduction in parameters (a minimal sketch follows this summary) [4][13]
- Even with 90% sparsity in B, LoRI maintains good performance, indicating that the adaptation process does not require updating A [4][17]

Multi-Task Learning and Adapter Merging
- Multi-task learning is essential for building versatile models, but training on mixed datasets is costly; LoRI allows existing adapters to be merged without retraining, combining LoRA adapters for multi-task capability [7]
- Directly merging heterogeneous LoRA adapters can cause parameter interference, but LoRI mitigates this by mapping task-specific adapters into nearly orthogonal subspaces [7][20]

Continuous Learning and Safety
- LoRI provides a lightweight continual-learning recipe that maintains safety alignment while adapting to new tasks, addressing catastrophic forgetting [8][22]
- In the two-phase training of safety adapters, LoRI-S outperforms other methods at retaining safety alignment, even under aggressive sparsity [22][23]

Performance Evaluation
- Extensive experiments across benchmarks show that LoRI matches or exceeds full fine-tuning and other PEFT methods while using 95% fewer trainable parameters [9][19]
- In single-task settings, LoRI variants deliver competitive results across natural language understanding, mathematics, programming, and safety tasks [19][20]

Conclusion
- Overall, LoRI is an effective, lightweight approach to building safe adapters that support downstream task adaptation while maintaining alignment [23]
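To make the mechanism concrete, here is a minimal PyTorch sketch of a LoRI-style adapter, not the authors' code: A is a frozen random projection, and only a sparsely masked B is trained, so only a small fraction of LoRA's usual parameters receive gradients. The mask here is random purely for illustration (LoRI derives a task-specific mask); layer sizes and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRILinear(nn.Module):
    """Illustrative LoRI-style adapter wrapped around a frozen linear layer.

    The projection A is random and never trained; only B is trained, and a
    fixed binary mask keeps roughly `density` of B's entries active, zeroing
    gradients to the rest.
    """

    def __init__(self, base: nn.Linear, rank: int = 8,
                 density: float = 0.05, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)

        d_out, d_in = base.out_features, base.in_features
        # Frozen random projection A (a buffer, not a parameter).
        self.register_buffer("A", torch.randn(rank, d_in) / rank ** 0.5)
        # Trainable B with a fixed sparsity mask; LoRI builds this mask per
        # task, a random mask is used here only to keep the sketch simple.
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        self.register_buffer("mask", (torch.rand(d_out, rank) < density).float())
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Masked low-rank update: entries of B outside the mask get no gradient.
        delta = (self.B * self.mask) @ self.A          # (d_out, d_in)
        return self.base(x) + F.linear(x, delta) * self.scale


# Usage sketch: wrap one projection of a frozen backbone and count trainables.
layer = LoRILinear(nn.Linear(768, 768), rank=8, density=0.05)
y = layer(torch.randn(4, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
```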