Long-Text Processing in Large Language Models

10,000 tokens is the new benchmark for long-text evaluation; beyond it, 18 large models collectively falter
量子位 (QbitAI) · 2025-07-17 02:43
Core Insights
- The article examines how large language model (LLM) performance declines as input context length grows, noting that the decline is not uniform but sets in at specific token lengths [10][21][44]
- A recent study by the Chroma team tested 18 mainstream LLMs and found that models such as GPT-4.1 and Claude Sonnet 4 suffer significant accuracy drops when processing longer inputs [8][9][19]

Group 1: Performance Decline
- As input length increases, model performance deteriorates, with a notable drop around 10,000 tokens, where accuracy can fall to roughly 50% [4][21]
- Different models hit their decline threshold at different lengths, with some losing accuracy earlier than others [6][7][19]
- The study indicates that semantic similarity between the "needle" (the target information) and the question strongly affects performance, with lower similarity leading to steeper declines [19][21]

Group 2: Experimental Findings
- Four controlled experiments assessed how input length interacts with factors such as needle-question semantic similarity, interference (distractor) information, and text structure [17][35][41]; minimal code sketches of these setups follow Group 4
- The first experiment showed that as input length grew, models struggled more when needle-question semantic similarity was low, producing a sharper performance drop [19][21]
- The second experiment showed that distractor items significantly reduce accuracy, with multiple distractors causing a 30%-50% drop relative to baseline performance [26][28]

Group 3: Structural Impact
- The structure of the background text (the haystack) also matters: coherent structures produce larger accuracy declines than disordered ones [40][42]
- Most models performed worse on coherently structured haystacks as input length increased, while the decline was milder when the haystack was shuffled [41][44]
- The findings suggest LLMs struggle to process complex logical structure in long texts, pointing to a need for better handling of such inputs [41][44]

Group 4: Implications and Future Directions
- The results highlight the limits of current LLMs on long-context tasks and motivate clearer instructions and deliberate context-management strategies [44]
- Chroma, the team behind the research, aims to address these challenges by building open-source tools that improve how LLM applications handle long texts [45][48]
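To make the reported setup concrete, here is a minimal Python sketch of a needle-in-a-haystack length sweep: a single fact is buried in filler text of increasing length and the model is asked to retrieve it. The filler sentences, the whitespace-based token count, and the stubbed-out `ask_model` call are illustrative assumptions, not the Chroma team's actual harness.

```python
import random

FILLER_SENTENCES = [
    "The quarterly report covered logistics, staffing, and vendor contracts.",
    "Meeting notes were archived in the shared drive as usual.",
    "The facilities team rotated the maintenance schedule for the winter.",
]
NEEDLE = "The access code for the archive room is 7341."
QUESTION = "What is the access code for the archive room?"
EXPECTED = "7341"

def ask_model(prompt: str) -> str:
    # Placeholder: replace with a real LLM API call (OpenAI, Anthropic, etc.).
    return "The access code is 7341."

def make_long_input(n_tokens: int) -> str:
    """Pad with filler up to roughly n_tokens (whitespace tokens as a crude proxy),
    then bury the needle at a random position."""
    body, count = [], 0
    while count < n_tokens:
        sentence = random.choice(FILLER_SENTENCES)
        body.append(sentence)
        count += len(sentence.split())
    body.insert(random.randrange(len(body) + 1), NEEDLE)
    return " ".join(body)

def accuracy_at_length(n_tokens: int, trials: int = 20) -> float:
    hits = 0
    for _ in range(trials):
        prompt = f"{make_long_input(n_tokens)}\n\nQuestion: {QUESTION}"
        hits += EXPECTED in ask_model(prompt)
    return hits / trials

# Sweep context lengths; the article reports a notable accuracy drop around 10,000 tokens.
for length in (1_000, 5_000, 10_000, 50_000):
    print(length, accuracy_at_length(length))
```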
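The first experiment varies how semantically close the needle is to the question. One standard way to quantify that closeness is cosine similarity between sentence embeddings; the sketch below uses the sentence-transformers library with an arbitrary small model, which is an assumption rather than necessarily the metric the study used. A needle that restates the question's wording scores high; one that only implies the answer scores lower.

```python
from sentence_transformers import SentenceTransformer, util

# Any sentence-embedding model works; all-MiniLM-L6-v2 is just a small, common choice.
model = SentenceTransformer("all-MiniLM-L6-v2")

question = "Which character knew where the key was hidden?"
high_sim_needle = "Only Marla knew where the key was hidden."   # wording overlaps the question
low_sim_needle = "Marla had watched the gardener slip something under the third flowerpot."  # answer stated obliquely

q_emb = model.encode(question, convert_to_tensor=True)
for needle in (high_sim_needle, low_sim_needle):
    n_emb = model.encode(needle, convert_to_tensor=True)
    score = util.cos_sim(q_emb, n_emb).item()
    print(f"{score:.3f}  {needle}")
```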
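The distractor and structure experiments can be pictured as two knobs on how the haystack is assembled: how many needle-like but wrong statements are mixed in, and whether the background text keeps its logical order or is shuffled sentence by sentence. The sketch below is one plausible construction under those assumptions; the distractor texts and the insertion scheme are hypothetical.

```python
import random

NEEDLE = "Only Marla knew where the key was hidden."
# Distractors resemble the needle but do not answer the question correctly.
DISTRACTORS = [
    "Some said the caretaker once knew where a key was hidden, but he had forgotten.",
    "A key to the old shed was hidden years ago, though nobody remembered by whom.",
]

def build_haystack(paragraphs: list[str], shuffled: bool, n_distractors: int, seed: int = 0) -> str:
    """Insert the needle and n_distractors into the background text.
    shuffled=False keeps the original paragraph order (coherent narrative);
    shuffled=True scrambles sentences, destroying the logical flow."""
    rng = random.Random(seed)
    units = paragraphs[:]
    if shuffled:
        # Split into sentences and scramble them so no coherent structure remains.
        units = [s.strip() + "." for p in paragraphs for s in p.split(".") if s.strip()]
        rng.shuffle(units)
    for item in [NEEDLE] + DISTRACTORS[:n_distractors]:
        units.insert(rng.randrange(len(units) + 1), item)
    return " ".join(units)

paragraphs = [
    "Marla arrived at the house on a rainy Tuesday. She spent the morning in the garden.",
    "The gardener left before noon. Marla watched him slip something under the third flowerpot.",
]
coherent = build_haystack(paragraphs, shuffled=False, n_distractors=2)
scrambled = build_haystack(paragraphs, shuffled=True, n_distractors=2)
print(coherent)
print(scrambled)
```

Feeding the coherent and scrambled variants to the same model at matched input lengths would reproduce the comparison described under Group 3.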