Large Model Evaluation

o3-pro draws a crowd by solving a tough word puzzle; former OpenAI employee mocks Apple: if this isn't reasoning, what is?
量子位· 2025-06-13 02:25
Core Viewpoint
- OpenAI's latest reasoning model, o3-pro, demonstrates strong reasoning capabilities but shows mixed performance across evaluations, indicating that it needs rich context and specific prompts to reach its full potential [1][2][3][4].

Evaluation Results
- o3-pro produced a correct answer in 4 minutes and 25 seconds on a reasoning test, showcasing its ability to process complex queries [2].
- In official evaluations, o3-pro surpassed previous models like o3 and o1-pro, becoming OpenAI's best coding model to date [8].
- In the LiveBench ranking, however, o3-pro showed only a slight advantage over o3 (a score difference of 0.07), and it lagged behind o3 on agentic coding (31.67 vs 36.67) [11].

Contextual Performance
- o3-pro excels in short-context scenarios, improving on o3, but struggles with long-context processing, scoring 65.6 compared to Gemini 2.5 Pro's 90.6 on 192k-context tests [15][16].
- The model's performance is highly dependent on the background information provided, as noted in user reports [24][40].

User Insights
- Bindu Reddy, a former executive at Amazon and Google, pointed out that o3-pro lacks proficiency in tool usage and agent capabilities [12].
- Ben Hylak, a former engineer at Apple and SpaceX, emphasized that o3-pro's effectiveness increases significantly when it is treated as a report generator rather than a chat model, requiring ample context for optimal results (see the sketch after this summary) [22][24][26].

Comparison with Other Models
- Ben Hylak found o3-pro's outputs to be superior to those of Claude Opus and Gemini 2.5 Pro, highlighting its unique value in practical applications [39].
- The model's ability to understand its environment and accurately describe tool usage has improved, making it a better coordinator in multi-step tasks [30][31].

Conclusion
- The evaluation of o3-pro reveals that while it has advanced reasoning capabilities, its performance is contingent on the context and prompts provided, so maximizing its utility requires a deliberate prompting strategy [40][41].
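Hylak's "report generator" advice translates naturally into an API workflow: gather the background material first, then send it in one context-heavy request rather than a back-and-forth chat. Below is a minimal sketch of that pattern, assuming o3-pro is exposed through OpenAI's Responses API under the model name "o3-pro"; the file names and placeholder documents are illustrative, not from the article.

```python
from openai import OpenAI

client = OpenAI()  # API key read from the OPENAI_API_KEY environment variable

# Hypothetical background material -- the "ample context" the article says
# o3-pro needs: goals, prior notes, constraints, anything relevant.
background = "\n\n".join([
    open("company_goals.md").read(),        # illustrative file names
    open("past_planning_notes.md").read(),
    open("current_constraints.md").read(),
])

# Treat the model as a report generator: one large, context-rich request.
response = client.responses.create(
    model="o3-pro",  # assumed model identifier
    instructions=(
        "You are producing a detailed written report, not chatting. "
        "Use all of the provided background before recommending anything."
    ),
    input=background + "\n\nTask: propose a concrete plan and list the trade-offs.",
)

print(response.output_text)
```

The design point is the shape of the request, not the specific prompt: front-load everything the model might need, then ask for a single long-form output.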
DeepSeek-R1 and o1 both struggle to pass! ByteDance open-sources a new knowledge-reasoning benchmark covering 285 disciplines
量子位· 2025-03-04 04:51
Core Viewpoint
- SuperGPQA, a new evaluation benchmark for large language models (LLMs), aims to address the limitations of existing benchmarks and provide a more comprehensive assessment of model capabilities [2][10][20].

Group 1: Limitations of Existing Benchmarks
- Traditional evaluation benchmarks like MMLU and GPQA have become increasingly homogeneous, making it difficult to assess the true capabilities of models [1][8].
- These benchmarks typically cover fewer than 50 subjects, lacking diversity and long-tail knowledge, which limits their effectiveness [8][10].
- Top models like GPT-4o now exceed 90% accuracy on traditional benchmarks, so those benchmarks no longer differentiate model performance [8][9].

Group 2: Introduction of SuperGPQA
- SuperGPQA, developed by ByteDance's Doubao model team in collaboration with the M-A-P open-source community, covers 285 graduate-level subjects and includes 26,529 specialized questions [3][10].
- The evaluation framework was built over six months with contributions from nearly 100 scholars and engineers, ensuring a high-quality assessment process [2][6].
- The benchmark features a more challenging format, with an average of 9.67 options per question compared to the traditional 4-option format [10].

Group 3: Addressing Key Pain Points
- SuperGPQA directly targets three major pain points in model evaluation: incomplete subject coverage, questionable question quality, and a lack of diverse evaluation dimensions [5][6].
- The benchmark employs a rigorous data-construction process involving expert annotation, crowdsourced input, and collaborative validation with LLMs to ensure high-quality questions [6][11].
- Question difficulty is balanced across subjects, with 42.33% of questions requiring mathematical calculation or rigorous reasoning [12].

Group 4: Performance Insights
- Even the strongest model evaluated, DeepSeek-R1, achieved only 61.82% accuracy on SuperGPQA, significantly below human graduate-level performance, which averages above 85% [4][20].
- Reasoning models dominate the leaderboard, but their performance still lags behind human capabilities [17][20].
- The benchmark has been made publicly available on HuggingFace and GitHub and quickly gained traction in the community (a loading-and-scoring sketch follows below) [7][19].

Group 5: Future Implications
- The development of SuperGPQA reflects ByteDance's commitment to enhancing model capabilities and addressing criticism of its foundational work [22][24].
- The benchmark may influence the future landscape of LLM evaluation, pushing for higher standards and more rigorous assessments [22][24].
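Since the article notes the benchmark is public on HuggingFace and GitHub, a typical workflow is to load the question set, score a model's chosen option letters by exact match, and compare against the random-guessing baseline implied by the ~9.67-option format. The sketch below assumes a HuggingFace dataset identifier of "m-a-p/SuperGPQA" and column names such as "uuid", "options", and "answer_letter"; these are assumptions for illustration, not details confirmed by the article.

```python
from datasets import load_dataset

# Dataset id and column names are assumptions for illustration.
ds = load_dataset("m-a-p/SuperGPQA", split="train")

def exact_match_accuracy(predictions: dict[str, str]) -> float:
    """Fraction of questions whose predicted option letter matches the answer key.

    `predictions` maps a question id to a chosen letter, e.g. {"q-001": "C"}.
    """
    correct = sum(
        1 for row in ds if predictions.get(row["uuid"]) == row["answer_letter"]
    )
    return correct / len(ds)

def random_guess_baseline() -> float:
    """Expected accuracy of uniform random guessing, per-question option counts.

    With roughly 9.67 options per question on average, this lands near 10%,
    versus 25% for a traditional 4-option benchmark.
    """
    return sum(1.0 / len(row["options"]) for row in ds) / len(ds)
```

Against that chance baseline, DeepSeek-R1's reported 61.82% is far above guessing yet still well short of the >85% graduate-level human average the article cites.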