Continuously Injecting New Knowledge into Large Models: Beihang's CASE Framework Survives 1000 Edits Without Forgetting, with Under 1MB of Extra Parameters | WWW'26
量子位 (QbitAI) · 2026-03-27 05:10

Core Viewpoint
- The article discusses the CASE framework, introduced by a team from Beihang University, which addresses the challenges of lifelong model editing in large language models (LLMs) by quantifying knowledge conflicts and tuning only sensitive neurons, improving both the accuracy and the efficiency of knowledge updates [1][3][30].

Group 1: Challenges in Lifelong Model Editing
- Existing methods face two main issues: "blindly adding parameters," which leads to excessive resource consumption, and "indiscriminate parameter tuning," which causes knowledge conflicts and catastrophic forgetting [4][3].
- "Knowledge aging" and "fact hallucination" are prevalent in LLMs, which makes the goal of lifelong model editing particularly challenging [3][4].

Group 2: The CASE Framework
- The CASE framework consists of two core components: the Conflict-Assessed Editing Allocation (CAA) module and the Knowledge-sensitive Neuron Tuning (KNT) strategy [6][8].
- The CAA module quantifies conflicts and allocates parameter space accordingly, so that new knowledge is either shared with existing edits or isolated from them, depending on compatibility [8][14].
- The KNT strategy tunes only the neurons most sensitive to the current piece of knowledge, preventing unnecessary updates to irrelevant parameters [16][17].

Group 3: Experimental Results
- In experiments, CASE improved average accuracy by nearly 10% over existing methods after 1000 continuous knowledge edits, while staying parameter-efficient with less than 1MB of additional parameters [2][19].
- The framework showed superior performance on two core tasks: 82% generalization on the ZsRE lifelong knowledge editing task and a 60% reduction in perplexity on the SelfCheckGPT task [21][22].
- CASE maintained 95% accuracy after 1000 edits, significantly outperforming other methods, which suffered substantial accuracy declines [24].
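The CAA routing and KNT neuron selection described in Group 2 can be sketched as follows. This is a minimal illustration only: the function names (`conflict_score`, `route_edit`, `sensitive_neuron_mask`), the cosine-similarity conflict proxy, and the top-ratio gradient cutoff are all assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

def conflict_score(new_key, stored_keys):
    """Proxy for CAA conflict: max cosine similarity between the new edit's
    key vector and the keys of previously stored edits (assumption)."""
    if stored_keys.shape[0] == 0:
        return 0.0
    k = new_key / np.linalg.norm(new_key)
    K = stored_keys / np.linalg.norm(stored_keys, axis=1, keepdims=True)
    return float((K @ k).max())

def route_edit(new_key, stored_keys, threshold=0.8):
    """CAA-style allocation: edits that strongly overlap with stored knowledge
    are isolated in their own parameter slot; compatible ones share space."""
    if conflict_score(new_key, stored_keys) >= threshold:
        return "isolated"
    return "shared"

def sensitive_neuron_mask(grads, top_ratio=0.01):
    """KNT-style selection: keep only the neurons whose gradient magnitude for
    the current edit is in the top fraction; all other neurons stay frozen."""
    k = max(1, int(round(grads.size * top_ratio)))
    idx = np.argsort(np.abs(grads))[-k:]
    mask = np.zeros_like(grads)
    mask[idx] = 1.0
    return mask
```

A high-conflict edit (nearly duplicating a stored key) would be routed to an isolated slot, while an orthogonal edit shares parameter space; the mask then restricts the update to the few most gradient-sensitive neurons, which is one plausible way to keep the extra parameter budget small.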
Group 4: Efficiency and Future Applications
- The CASE framework is highly efficient, requiring minimal additional parameters and maintaining fast inference times, which makes it suitable for real-world applications [23][30].
- Future work will explore applying CASE to multimodal models and unstructured data editing, extending the lifelong learning capabilities of large models across domains [31].
