Attention-Based Peeking (ABP)

AI solves math problems using only the last token
量子位 · 2025-09-14 05:05
Core Insights
- The research finds that in mental arithmetic tasks, most of the computation is concentrated at the last token rather than being distributed across all tokens, suggesting that global information access is not necessary for specific tasks such as mental arithmetic [1][11].

Group 1: Research Methodology
- The researchers used Context-Aware Mean Ablation (CAMA) and Attention-Based Peeking (ABP) to run a series of ablation experiments on models such as Llama-3-8B [2][22]. (A minimal sketch of this ablation machinery appears after the summary.)
- The experiments aimed to identify the "minimum computation" the model needs in order to perform the task well, by systematically removing or altering parts of the model [3].
- This yielded a sparse subgraph, termed "All-for-One" (AF1), that carries out the computation with a minimal number of layers and limited information transfer [4][5].

Group 2: Model Structure and Functionality
- In the AF1 structure, the initial layers (L_wait) do not perform input-specific computation at each token's own position; they handle general preparatory work instead [7].
- Information is then transferred to the last token through a small number of intermediate layers (L_transfer), after which the last token performs the final calculation on its own [8][9]. (The attention schedule this implies is sketched below.)
- This separation of general computation from input-specific computation is what makes the model's handling of arithmetic tasks efficient [10].

Group 3: Experimental Findings
- The experiments showed that Llama-3-8B needs only the first 14 layers for general computation, followed by 2 layers for information transfer, with the remaining layers devoted to the last token's own computation [24][26].
- AF1_llama showed high fidelity across eight tasks, maintaining performance close to that of the original model [28][29].
- The role of specific attention heads in the arithmetic calculation was confirmed: the model retained roughly 95% accuracy even after nearly 60 heads were removed, indicating substantial redundancy among attention heads [30]. (A head-ablation sketch is included below.)

Group 4: Generalization and Limitations
- AF1_llama was tested for generalization to other arithmetic forms: it achieved high accuracy on direct arithmetic but failed on tasks requiring semantic understanding, such as word problems and Python code [32][34].
- Similar AF1-like subgraphs were found in Pythia and GPT-J, although these models showed shorter waiting periods and less clear-cut performance boundaries than Llama [35][36].

Group 5: Contributions and Innovations
- The work advances the understanding of arithmetic reasoning and cross-token computation mechanisms in large language models [37].
- The methods introduced, CAMA and ABP, are innovations that could extend beyond arithmetic tasks to broader applications [37].
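The article describes CAMA only at a high level, so the following is a minimal sketch of the general mechanism, assuming a Hugging Face Llama-style model: residual-stream activations entering a chosen layer are replaced, for every position except the last, with mean activations computed over a set of reference prompts, so those positions can no longer carry input-specific information forward. The model name, layer index, reference prompts, and hook placement are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal CAMA-style sketch (assumptions: HF Llama-style model, equal-length
# reference prompts, ablation applied from a single layer boundary onward).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Meta-Llama-3-8B"   # gated; any decoder-only HF model works for the sketch
LAYER = 14                             # assumed boundary where the ablation starts

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype="auto")
model.eval()

@torch.no_grad()
def mean_activations(prompts, layer):
    """Average the residual-stream input of `layer` over reference prompts (same token length)."""
    batch = tok(prompts, return_tensors="pt")
    captured = []
    def grab(module, args):
        captured.append(args[0])                      # hidden_states entering the layer
    handle = model.model.layers[layer].register_forward_pre_hook(grab)
    model(**batch)
    handle.remove()
    return captured[0].mean(dim=0, keepdim=True)      # (1, seq_len, d_model)

def cama_hook(mean_h):
    """Replace every position except the last with the reference mean."""
    def hook(module, args):
        hidden = args[0].clone()
        n = hidden.shape[1] - 1
        hidden[:, :n, :] = mean_h[:, :n, :]
        return (hidden,) + args[1:]
    return hook

# Usage sketch: ablate all but the last token from LAYER onward for one arithmetic prompt.
ref_mean = mean_activations(["1+2=", "4+7=", "6+3="], LAYER)
handle = model.model.layers[LAYER].register_forward_pre_hook(cama_hook(ref_mean))
out = model.generate(**tok("3+5=", return_tensors="pt"), max_new_tokens=2)
handle.remove()
print(tok.decode(out[0], skip_special_tokens=True))
```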
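To make the L_wait / L_transfer / last-token split concrete, here is a small standalone sketch of the attention schedule that the AF1 description implies for the query at the last position: cross-token attention is blocked during the wait layers, opened for the two transfer layers, and blocked again afterwards. The layer counts follow the numbers reported for Llama-3-8B; the mask construction itself is an illustrative assumption rather than the paper's implementation of ABP.

```python
# Attention schedule implied by AF1 for the last token (illustrative sketch).
import torch

N_LAYERS, L_WAIT, L_TRANSFER = 32, 14, 2   # Llama-3-8B numbers reported in the article

def af1_last_token_mask(layer: int, seq_len: int) -> torch.Tensor:
    """Boolean mask (True = may attend) for the query at the last position in `layer`."""
    mask = torch.zeros(seq_len, dtype=torch.bool)
    mask[-1] = True                                   # the last token can always attend to itself
    if L_WAIT <= layer < L_WAIT + L_TRANSFER:
        mask[:] = True                                # transfer layers: peek at the whole prompt
    return mask

for layer in range(N_LAYERS):
    m = af1_last_token_mask(layer, seq_len=6)
    print(f"layer {layer:2d}  last token may attend to: {m.int().tolist()}")
```

In this schedule only layers 14 and 15 open cross-token attention; everything before is "waiting", and everything after is the last token computing on its own.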
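The head-redundancy result (about 95% accuracy after removing nearly 60 heads) suggests a simple way to reproduce the flavor of that experiment: zero out a chosen set of heads and re-measure accuracy. The sketch below zeroes each ablated head's slice of the concatenated head outputs just before the attention output projection (o_proj) of a Llama-style block; which heads to remove and how accuracy is scored are assumptions for illustration, not the paper's procedure.

```python
# Illustrative head-ablation sketch for a Llama-style HF model.
import torch

def make_head_ablation_hook(heads, head_dim):
    """Zero the given head indices in the input to o_proj."""
    def hook(module, args):
        x = args[0].clone()                           # (batch, seq, n_heads * head_dim)
        for h in heads:
            x[..., h * head_dim:(h + 1) * head_dim] = 0.0
        return (x,) + args[1:]
    return hook

def ablate_heads(model, heads_by_layer):
    """Attach hooks for {layer_idx: [head_idx, ...]}; call .remove() on the handles to undo."""
    head_dim = model.config.hidden_size // model.config.num_attention_heads
    handles = []
    for layer, heads in heads_by_layer.items():
        o_proj = model.model.layers[layer].self_attn.o_proj
        handles.append(o_proj.register_forward_pre_hook(make_head_ablation_hook(heads, head_dim)))
    return handles

# Usage sketch: remove two (hypothetical) heads in layer 16, then evaluate arithmetic accuracy
# with your own prompt set and scoring loop.
# handles = ablate_heads(model, {16: [3, 27]})
# ... run the arithmetic evaluation here ...
# for h in handles: h.remove()
```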