LiveCodeBench Pro

Saining Xie's team's new benchmark leaves LLMs collectively stumped: DeepSeek R1 and Gemini 2.5 Pro both score zero
机器之心· 2025-06-18 09:34
Core Insights
- The article discusses the significant gap between current LLMs (large language models) and human expert-level performance in competitive programming [2][18].
- A new benchmark, LiveCodeBench Pro, was introduced to evaluate LLMs against high-quality programming problems sourced from top competitions [4][6].

Evaluation of LLMs
- LLMs have shown impressive results in code generation, appearing to surpass human averages on some competitive programming benchmarks [2][12].
- However, when evaluated without external tools, the best-performing models achieved a pass rate of only 53% on medium-difficulty problems and 0% on hard problems [12][18].

Benchmark Details
- LiveCodeBench Pro includes 584 high-quality problems from competitions such as Codeforces, ICPC, and IOI, with continuous updates to mitigate data contamination [6][10].
- Problems are categorized by algorithm type, and model performance is analyzed through failed submissions [7][12].

Model Performance Analysis
- The analysis revealed that LLMs perform well on implementation-heavy problems but struggle with complex algorithmic reasoning and edge-case analysis [17][18].
- Knowledge-intensive and logic-intensive problems are areas where LLMs excel, while observation-intensive problems and case work present significant challenges [20][22][24].

Comparison with Human Performance
- LLMs exhibit a higher rate of algorithmic logic errors than humans, while making fewer implementation logic errors [27][30].
- The models' inability to handle edge cases and their reliance on external tools for high scores highlight the limits of their reasoning capabilities [17][30].

Impact of Multiple Attempts
- Increasing the number of attempts (pass@k) significantly improves model performance, although high-difficulty problems remain unsolved; a sketch of the standard pass@k estimator follows this summary [33][36].
- The performance gap between models with terminal access and those without indicates that tool usage plays a crucial role in boosting scores [34][36].

Reasoning Capability Comparison
- Enabling reasoning capabilities leads to substantial performance improvements, particularly in combinatorial mathematics and knowledge-intensive categories [38][41].
- However, the improvement is limited in observation-intensive categories, raising questions about the effectiveness of current reasoning methods in these areas [42].
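For reference, pass@k is usually reported with the unbiased estimator popularized by the HumanEval work; the summaries above do not state which estimator LiveCodeBench Pro uses, so the Python sketch below is illustrative only, assuming n sampled submissions per problem of which c pass all tests.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total number of sampled submissions for a problem
    c: number of those submissions that pass all tests
    k: attempt budget being scored
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    # 1 minus the probability that all k drawn samples fail
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 3 passing submissions out of 10 samples, scored at k = 1 and k = 5
print(pass_at_k(10, 3, 1))  # 0.3
print(pass_at_k(10, 3, 5))  # ~0.917
```

Averaging this quantity over all problems gives a benchmark-level pass@k curve; raising k lifts scores on easy and medium problems, but per the summary above the hard tier stays at zero.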
All large models score zero! A Chinese-led team headed by Saining Xie releases a new competitive programming benchmark, with problems updated daily to rule out memorization
量子位· 2025-06-18 09:17
Core Viewpoint
- The recent LiveCodeBench Pro benchmark revealed that leading large language models (LLMs) performed poorly, with every model scoring zero on the hardest problems, indicating that they have not yet reached the level of human experts in competitive programming [1][2][8].

Group 1: Benchmark Overview
- LiveCodeBench Pro is a real-time benchmark platform built from competitive programming problems drawn from IOI, Codeforces, and ICPC [3].
- The question bank is updated daily to prevent LLMs from memorizing problems, keeping the evaluation environment challenging [4][15].
- The benchmark consists of 584 top-tier competition problems, categorized by cognitive focus and difficulty level, with automatic selection based on a normal distribution [15][17].

Group 2: Model Performance
- The best-performing model achieved a pass rate of only 53% on medium-difficulty problems, while the pass rate on hard problems was 0% [9][10].
- Across models, performance was strong on knowledge-intensive and logic-intensive problems but weak on observation-intensive ones [26][29].
- LLMs demonstrated advanced skill at precise implementation but fell short in algorithm design and complex case analysis [28][29].

Group 3: Testing Methodology
- The team categorized problems by their underlying algorithmic concepts and recorded the official Codeforces difficulty ratings [19].
- Each model's submissions were evaluated against human expert solutions, and the results indicate that LLMs often failed to make effective use of the provided sample inputs (a minimal checking harness is sketched after this summary) [30][32].
- The team plans to release a completely new evaluation set each quarter to keep the testing environment relevant and challenging [38].

Group 4: Team Composition
- The LiveCodeBench Pro team includes several Olympiad competition winners, a significant portion of whom are of Chinese descent [40].
- Key team members come from prestigious institutions and have previously interned at major tech companies, lending the project credibility and expertise [41][44].
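The finding that models often fail to exploit the provided sample inputs suggests an obvious self-check step before submission. The sketch below is not the team's grading harness; it is a hypothetical helper, assuming a Python submission at `solution_path` and sample (input, output) pairs copied from the problem statement.

```python
import subprocess

def passes_samples(solution_path: str,
                   samples: list[tuple[str, str]],
                   timeout_s: float = 2.0) -> bool:
    """Run a candidate solution on each sample test and compare outputs.

    solution_path: path to a Python submission (hypothetical setup).
    samples: (sample_input, expected_output) pairs from the statement.
    Returns True only if every sample matches after stripping whitespace.
    """
    for sample_in, expected_out in samples:
        try:
            result = subprocess.run(
                ["python3", solution_path],
                input=sample_in,
                capture_output=True,
                text=True,
                timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return False  # timeout on a sample counts as a failure
        if result.returncode != 0:
            return False  # runtime error on a sample test
        if result.stdout.strip() != expected_out.strip():
            return False  # wrong answer on a sample test
    return True

# Example: one sample pair for a hypothetical "sum of n integers" problem
if __name__ == "__main__":
    samples = [("3\n1 2 3\n", "6\n")]
    print(passes_samples("candidate.py", samples))
```

A model or scaffold that always ran such a check would at least never submit code that fails the published samples, which the summary above identifies as a recurring failure mode.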