Xie Saining's team's new benchmark leaves LLMs collectively stumped: DeepSeek R1 and Gemini 2.5 Pro both score zero
机器之心· 2025-06-18 09:34
Core Insights
- The article discusses the significant gap between current LLMs (Large Language Models) and human expert-level performance in competitive programming [2][18].
- A new benchmark, LiveCodeBench Pro, was introduced to evaluate LLMs on high-quality programming problems sourced from top competitions [4][6].

Evaluation of LLMs
- LLMs have shown impressive results in code generation, surpassing human averages on some benchmarks, particularly in competitive programming [2][12].
- However, when evaluated without external tools, the best-performing models achieved a pass rate of only 53% on medium-difficulty problems and 0% on high-difficulty problems [12][18].

Benchmark Details
- LiveCodeBench Pro includes 584 high-quality problems from competitions such as Codeforces, ICPC, and IOI, with continuous updates to mitigate data contamination [6][10].
- Problems are categorized by algorithm type, and model performance is analyzed based on failed submissions [7][12].

Model Performance Analysis
- The analysis revealed that LLMs perform well on implementation-heavy problems but struggle with complex algorithmic reasoning and edge-case analysis [17][18].
- Knowledge-intensive and logic-intensive problems are areas where LLMs excel, while observation-intensive problems and case work present significant challenges [20][22][24].

Comparison with Human Performance
- LLMs exhibit a higher rate of algorithmic logic errors than humans, while making fewer implementation logic errors [27][30].
- The models' inability to handle edge cases and their reliance on external tools for high scores highlight the limits of their reasoning capabilities [17][30].

Impact of Multiple Attempts
- Increasing the number of attempts (pass@k) significantly improves model performance, although high-difficulty problems remain unsolved [33][36].
- The performance difference between models with terminal access and those without indicates that tool usage plays a crucial role in raising scores [34][36].

Reasoning Capability Comparison
- Enabling reasoning capabilities leads to substantial performance improvements, particularly in combinatorial mathematics and knowledge-intensive categories [38][41].
- However, the improvement is limited in observation-intensive categories, raising questions about the effectiveness of current reasoning methods in these areas [42].
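The pass@k metric mentioned above is commonly computed with the standard unbiased estimator from the code-generation evaluation literature. A minimal sketch; the estimator itself is standard, but its use for LiveCodeBench Pro's exact protocol is an assumption:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one
    of k samples drawn from n generations (of which c are correct)
    solves the problem."""
    if n - c < k:
        # Fewer incorrect samples than k: a correct one is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 generations and 3 correct ones, a single draw passes 30% of
# the time, while drawing 5 raises the chance above 90%.
print(round(pass_at_k(10, 3, 1), 3))  # 0.3
print(round(pass_at_k(10, 3, 5), 3))  # 0.917
```

Raising k improves the score mechanically, which is why the article notes that more attempts help yet still leave the hardest problems at zero: if c = 0 for every model on a problem, pass@k stays 0 for any k.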
Just now: the Gemini 2.5 model family gets an update, and the new lightweight Flash-Lite can even write an operating system in real time
机器之心· 2025-06-18 01:24
机器之心 report | Editor: Panda

Just now, the Gemini family of models received a wave of updates. Google CEO Sundar Pichai tweeted that the newly launched Gemini 2.5 Flash-Lite is the most cost-effective model in the 2.5 series. Google positions 2.5 Flash-Lite for "high-volume, cost-sensitive tasks"; by comparison, 2.5 Pro is suited to coding and highly complex tasks, while 2.5 Flash sits in between, better for everyday tasks that need faster responses.

- The stable version of Gemini 2.5 Pro has been released and is generally available, unchanged from the June 5 preview.
- The stable version of Gemini 2.5 Flash has been released and is generally available, unchanged from the May 20 preview, though its pricing has been updated.
- Gemini 2.5 Flash-Lite is newly launched and available in preview.

| | 2.5 Flash-Lite | 2.5 Flash | 2.5 Pro |
| --- | --- | --- | --- |
| | THINKING OFF | THINKING | THINKING |
| Best for | High volume cost- ... | Fa ... | |
Behind OpenAI's $6.5 billion acquisition of Jony Ive's io: AI-native hardware companies combining software and hardware are on the rise
36Kr· 2025-06-17 23:51
Core Insights
- OpenAI has acquired Jony Ive's company io for $6.5 billion to develop a series of hardware products, indicating a strategic move toward integrating hardware with AI capabilities [1].
- The emergence of AI-native hardware faces challenges, including slow market penetration and limited user acceptance due to overly ambitious product designs [2][4].
- The second wave of AI-native hardware focuses on specific applications, such as meeting transcription and summarization, which have clear user demand and willingness to pay [6][8].

Group 1: AI Hardware Development
- The development of AI-native hardware is driven by advances in large language models, enabling more sophisticated human-computer interaction [2].
- Initial AI hardware products struggled due to high learning costs and a lack of clear application scenarios, leading to poor market performance [4][5].
- Companies are now refining their products to meet specific user needs, resulting in more mature offerings [9].

Group 2: Market Dynamics
- The pricing of AI hardware, such as the AI Pin at $699 and Apple's Vision Pro at $3,499, limits market penetration given the high cost relative to traditional smartphones [5].
- Supply-chain challenges in Silicon Valley hinder rapid hardware iteration and competitive pricing, making it difficult for these companies to gain market share [5][15].
- Chinese entrepreneurs benefit from a robust AI hardware supply chain and a large domestic market, positioning them well for future growth in this sector [15][16].

Group 3: Future Prospects
- The evolution of AI-native hardware may eventually lead to the replacement of smartphones and tablets, necessitating the development of AI-native operating systems [13][14].
- The potential for AI hardware to penetrate sectors such as education and healthcare is significant as capabilities improve and applications expand [12][16].
- Companies are increasingly focusing on specific use cases, such as educational tools and personal companion robots, to drive adoption and revenue [10][12].
MiniMax open-sources its first reasoning model: 456B parameters, performance surpassing DeepSeek-R1, technical report made public
36Kr· 2025-06-17 08:15
Core Insights
- MiniMax has launched the world's first open-source large-scale hybrid-architecture reasoning model, MiniMax-M1, with a five-day continuous update plan [2].

Model Specifications
- The M1 model has a parameter scale of 456 billion, activating 45.9 billion parameters per token; it supports 1-million-token context inputs and the industry's longest reasoning output of 80,000 tokens, 8 times that of DeepSeek-R1 [4].
- Two versions of MiniMax-M1 were trained, with thinking budgets of 40k and 80k [4].

Training and Cost
- Training used 512 H800 GPUs for three weeks at a cost of approximately $537,400 (around 3.859 million RMB), an order of magnitude lower than initial cost expectations [7].
- The M1 model is available for unlimited free use on the MiniMax app and web [7].

API Pricing Structure
- The API pricing for M1 is tiered by input length:
  - 0-32k input: 0.8 RMB/million tokens input, 8 RMB/million tokens output
  - 32k-128k input: 1.2 RMB/million tokens input, 16 RMB/million tokens output
  - 128k-1M input: 2.4 RMB/million tokens input, 24 RMB/million tokens output [7][11]
- Compared with DeepSeek-R1, M1's first-tier input price is 80% and its output price 50% of DeepSeek-R1's, while its second-tier input price is 1.2 times higher [9].

Performance Evaluation
- MiniMax-M1 outperforms models such as DeepSeek-R1 and Qwen3-235B on complex software engineering, tool use, and long-context tasks [13][14].
- In the MRCR test, M1's performance is slightly below Gemini 2.5 Pro but better than the other models [13].
- On the SWE-bench Verified test set, M1-40k and M1-80k perform slightly worse than DeepSeek-R1-0528 but better than other open-source models [14].

Technical Innovations
- M1 employs a mixture-of-experts (MoE) architecture and a lightning attention mechanism, allowing efficient scaling to long inputs and complex tasks [16].
- The model is trained with large-scale reinforcement learning (RL), using a new CISPO algorithm that enhances performance by optimizing importance-sampling weights [16][17].

Future Directions
- MiniMax emphasizes the need for "Language-Rich Mediator" agents to handle complex scenarios requiring dynamic resource allocation and multi-round reasoning [19].
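The tiered prices above translate directly into a small cost estimator. A minimal sketch, assuming the tier is chosen by input length alone and that boundaries are inclusive at the upper end; both are assumptions, since the summary does not spell out boundary handling:

```python
def m1_api_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate a MiniMax-M1 API call's cost in RMB from the tiered
    rates quoted in the article (RMB per million tokens). Tier
    selection by input length is an assumption from the summary."""
    if input_tokens <= 32_000:
        in_rate, out_rate = 0.8, 8.0
    elif input_tokens <= 128_000:
        in_rate, out_rate = 1.2, 16.0
    else:
        in_rate, out_rate = 2.4, 24.0
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A 100k-token prompt with a 10k-token reply lands in the middle tier:
# 0.1M * 1.2 + 0.01M * 16 = 0.12 + 0.16 RMB
print(round(m1_api_cost(100_000, 10_000), 3))  # 0.28
```

Note how output tokens dominate the bill at every tier, which matters for a model whose selling point is an 80k-token reasoning output.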
大模型“拼好题”,45K数据撬动18%提升,数学问题拒绝死记硬背 | MathFusion
量子位· 2025-06-17 07:41
Contributed by the MathFusion team
量子位 | 公众号 QbitAI

Current data-generation methods in mathematics are often limited to rewriting or transforming individual problems, akin to having students repeatedly solve variants of the same problem while ignoring the intrinsic connections between problems. To break this limitation and teach large models to link knowledge "in series" and "in parallel," teams from Shanghai AI Lab, Renmin University's Gaoling School of AI, and other institutions jointly proposed MathFusion, which strengthens large language models' mathematical problem-solving ability through instruction fusion.

MathFusion combines different math problems through three "fusion strategies," generating new problems that encapsulate the relationships and structures of both. Using only 45K synthetic instructions, MathFusion raises average accuracy by 18.0 percentage points across multiple benchmarks, demonstrating excellent data efficiency and performance.

(Figure: the closer a model is to the upper-left corner, the better it performs and the higher its data efficiency.)

Core idea: three "fusion strategies"

Sequential Fusion (顺序融合): chains two problems together, with the answer to the first serving as an input condition of the second. As in a multi-step problem, the model must solve the first step before it can tackle the second, learning to handle dependencies between problems.

Parallel Fusion (并列融合): fuses two similar problems, identifying and merging their mathematical concepts to pose a new ...
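Sequential fusion as described above can be sketched as a simple template composition. The fused-instruction wording below is a hypothetical illustration, not the paper's actual prompt:

```python
def sequential_fusion(problem_1: str, problem_2: str) -> str:
    """Toy sketch of MathFusion-style sequential fusion: chain two
    problems so the answer to the first becomes an input condition
    of the second. Template text is illustrative only."""
    return (
        f"Step 1: {problem_1}\n"
        f"Step 2: Let x be the answer to Step 1. {problem_2}"
    )

p1 = "A shop sells 12 apples per hour for 5 hours. How many apples are sold in total?"
p2 = "If the apples are packed into boxes of x / 6 apples each, how many boxes are filled?"
print(sequential_fusion(p1, p2))
```

Training on such fused instructions forces the model to resolve the dependency between sub-problems rather than memorize isolated problem variants, which is the intuition behind the reported data efficiency.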
MiniMax makes a major open-source release with the M1 model: million-token context surpassing DeepSeek R1, a double win in performance and efficiency
AI科技大本营· 2025-06-17 02:32
Core Insights
- MiniMax has officially open-sourced its latest large language model, MiniMax-M1, marking a significant development in the AI landscape [2][4].
- MiniMax-M1 is recognized as the world's first open-weight large-scale hybrid-attention reasoning model, showcasing substantial breakthroughs in performance and inference efficiency [4][6].

Model Specifications
- MiniMax-M1 has a parameter scale of 456 billion, with each token activating approximately 45.9 billion parameters, and supports a maximum context length of 1 million tokens, 8 times longer than that of DeepSeek R1 [7][12].
- The model's computational load (FLOPs) for generating 100,000 tokens is only 25% of that required by DeepSeek R1, a significant advantage in long-text processing tasks [7][12].

Training and Efficiency
- Training used a large-scale reinforcement learning (RL) strategy, optimizing performance across tasks including mathematical reasoning and software engineering [9][11].
- The complete RL training of MiniMax-M1 was finished in three weeks on 512 H800 GPUs at a cost of approximately $534,700, demonstrating high efficiency and cost-effectiveness [11].

Performance Comparison
- MiniMax-M1 comes in two versions, with maximum generation lengths of 40K and 80K tokens, and shows superior performance in complex software engineering, tool use, and long-context tasks compared with leading open-weight models such as DeepSeek-R1 and Qwen3-235B [12][19].
- In benchmark tests, MiniMax-M1 outperformed other models in categories including long-context understanding and tool use, establishing itself as a strong contender in the AI model landscape [19].
Just now, LMArena's latest model rankings are out! DeepSeek-R1's web programming ability has overtaken Claude Opus 4
机器之心· 2025-06-17 00:10
Core Viewpoint
- DeepSeek has made significant advances in the open-source model space with the release of its upgraded R1 reasoning model (0528), which shows competitive performance against proprietary models [2][4][10].

Performance Summary
- The R1-0528 model improves benchmark performance, enhances front-end functionality, reduces hallucinations, and supports JSON output and function calls [3].
- In the latest LMArena performance rankings, DeepSeek-R1 (0528) ranks 6th overall and is the top-ranked open model [5][4].
- Category rankings: 4th in Hard Prompts, 2nd in Coding, 5th in Math, 6th in Creative Writing, 9th in Instruction Following, 8th in Longer Query, and 7th in Multi-Turn [6][7].

Competitive Landscape
- On the WebDev Arena platform, DeepSeek-R1 (0528) is tied for first place with proprietary models such as Gemini-2.5-Pro-Preview-06-05 and Claude Opus 4, surpassing Claude Opus 4 in score [8].
- The performance of DeepSeek-R1 (0528) is seen as a milestone, particularly in AI programming, where it competes closely with established models like Claude [10].

User Engagement
- The strong performance of DeepSeek-R1 (0528) has driven increased interest and usage, prompting discussions about user experiences [9][11].
AI Investment Research Applications, Part 2: From Large Models to Agents, the Application of Coze in Financial Investment Research
Quantitative Models and Construction Methods

- **Model Name**: Report/Document Interpretation Workflow
  - **Model Construction Idea**: Automate the interpretation of financial reports and the extraction of key information, including formulas, using AI agents and workflows[28][30]
  - **Model Construction Process**:
    1. Use Coze's official file-reading plugin to extract document content and formula structures[30]
    2. Configure prompt logic and the output format using LLM nodes in the workflow[30]
    3. Test the workflow by inputting the URL of a quantitative research paper; the AI agent summarizes key information and accurately interprets formulas[31]
  - **Model Evaluation**: Demonstrates the ability to process complex financial documents and provide accurate formula interpretations, enhancing efficiency in financial research[31]

- **Model Name**: Real-Time Financial Data Analysis Workflow
  - **Model Construction Idea**: Automate the retrieval and analysis of real-time financial data from web sources or plugins[35][38]
  - **Model Construction Process**:
    1. Construct a workflow with a code-processing node that generates complete URLs from user-input stock codes[38]
    2. Use a data-scraping node to retrieve real-time financial data from websites such as Sina Finance[35][38]
    3. Feed the data into the DeepSeek LLM node for comprehensive analysis, focusing on profitability, solvency, and operational efficiency[39]
  - **Model Evaluation**: Provides timely and structured financial insights, enabling informed decision-making in investment analysis[39]

- **Model Name**: Research Report Summarization Workflow
  - **Model Construction Idea**: Automate the extraction and summarization of content from multiple research reports or news articles[52][55]
  - **Model Construction Process**:
    1. Use Coze plugins to scrape HTML content from websites such as Eastmoney[55]
    2. Employ loop nodes to process multiple reports and extract the relevant content[55]
    3. Store the extracted data (e.g., titles, content, institution names, links) in Feishu multi-dimensional tables for further analysis[57]
  - **Model Evaluation**: Effectively consolidates and organizes large volumes of research data, improving accessibility and usability for financial analysts[57]

Model Backtesting Results
- **Report/Document Interpretation Workflow**: Successfully summarized key information and accurately interpreted formulas from a quantitative research paper[31]
- **Real-Time Financial Data Analysis Workflow**: Generated detailed financial analyses based on real-time data, covering metrics such as ROE, net profit, and cash flow[39][48]
- **Research Report Summarization Workflow**: Efficiently extracted and stored structured data from multiple research reports, enabling streamlined analysis and reporting[57][60]

Quantitative Factors and Construction Methods
- **Factor Name**: None explicitly mentioned in the report

Factor Backtesting Results
- **Factor Results**: None explicitly mentioned in the report
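The code-processing node in the real-time financial data workflow above can be sketched as a small function that expands a user-entered stock code into a quote-page URL. The exchange-prefix rule and the URL pattern are illustrative assumptions, not the report's exact node configuration:

```python
def build_sina_finance_url(stock_code: str) -> str:
    """Sketch of the workflow's code node: map a user-entered A-share
    stock code to a Sina Finance quote-page URL. The prefix heuristic
    (Shanghai codes start with 6, Shenzhen with 0 or 3) and the URL
    pattern are assumptions for illustration."""
    code = stock_code.strip().lower()
    if not code.startswith(("sh", "sz")):
        code = ("sh" if code.startswith("6") else "sz") + code
    return f"https://finance.sina.com.cn/realstock/company/{code}/nc.shtml"

print(build_sina_finance_url("600519"))
# https://finance.sina.com.cn/realstock/company/sh600519/nc.shtml
```

A downstream scraping node would fetch this URL, and the LLM node would then receive the extracted financials as its prompt context.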
This Week's Highlights: Google AI's Path of Advancement, from Technical Accumulation to the Future Exploration of Discovering New Knowledge
老徐抓AI趋势· 2025-06-15 03:41
The key points in this article come from the livestream on Monday, June 9.

Google's future goal is to achieve artificial general intelligence (AGI): machines with general intelligence comparable to the human brain. The DeepMind team has a clear definition of AGI, holding that general intelligence means a machine can handle all kinds of tasks the way a human brain does. Although today's AI still falls short on some simple tasks, it is steadily patching its "cognitive gaps" and moving closer to true general intelligence.

Google and Tesla are considered the two companies closest to realizing a "world model": Google relies on YouTube's massive video data, while Tesla draws on real-world data collected by its vehicles' cameras. Such multi-dimensional real-world data is critical for training general intelligence and goes far beyond the depth of text data alone.

Overall, Google's AI technology is not only solid but also has the potential for innovation and breakthroughs. In the coming years, Google AI is likely to make advances in intelligent discovery, model refinement, and general intelligence, maintaining its leading position in the field. For anyone following AI, Google is worth continued tracking and attention.

As a major player in AI, Google's development history and technical accumulation deserve in-depth analysis. The structure of its parent company Alphabet is cleverly designed, running multiple innovative subsidiaries independently, such as Google, DeepMind, I ...
ICML 2025 | 1000x length generalization! Ant Group's new attention mechanism GCA achieves precise understanding of 16M-token long contexts
机器之心· 2025-06-13 15:45
The first author of this work is Hu Xiang, associate researcher at Ant Group's technology research institute; senior researcher Wu Wei is the corresponding author.

While large language models are booming, long-text modeling remains a highly challenging problem. At its root, this stems on one hand from the quadratic complexity of Transformers, the mainstream LLM architecture, and from inference-time memory that grows linearly with sequence length; and on the other hand from full attention's limited extrapolation ability, which struggles to generalize to inputs far longer than those seen in pretraining.

Beyond the industry's simple need to cut costs, the ability to handle long contexts efficiently touches a core question of artificial general intelligence (AGI): agents with permanent memory. If all the information a human receives from birth is viewed as one long context, then human memory is simply the ability to access that context. Memory can thus be seen as ultra-long-context access, and an agent that remembers all of its conversations with a user could well become a data moat for LLM companies (indeed, OpenAI has already released a similar capability).

Recently, Ant's research team offered a new approach to this problem. Just as a student in an open-book exam consults only the key pages relevant to the current question, a language model can attend only to past segments relevant to the current context. Starting from this idea, they propose GCA (Grouped Cross Attention), a causal-retrieval-based attention mechanism that learns, fully end to end, how ...
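The open-book analogy above can be illustrated with a toy retrieve-then-attend step: score fixed-size chunks of the past context against the current query, keep only the top-k chunks, and attend over their tokens. This is not the paper's GCA implementation (which learns chunk retrieval end to end); mean-pooled chunk keys and plain softmax attention here are simplifying assumptions:

```python
import numpy as np

def retrieve_then_attend(past, query, chunk_size=4, top_k=2):
    """Toy sketch of the retrieval idea behind GCA: attend only over
    the top-k past chunks most relevant to the current query, instead
    of the full context."""
    d = query.shape[-1]
    chunks = past.reshape(-1, chunk_size, d)   # (n_chunks, chunk, d)
    chunk_keys = chunks.mean(axis=1)           # one pooled key per chunk
    scores = chunk_keys @ query                # relevance score per chunk
    keep = np.argsort(scores)[-top_k:]         # indices of top-k chunks
    selected = chunks[keep].reshape(-1, d)     # tokens of the kept chunks
    weights = np.exp(selected @ query / np.sqrt(d))
    weights /= weights.sum()                   # softmax over kept tokens
    return weights @ selected                  # attended summary vector

rng = np.random.default_rng(0)
out = retrieve_then_attend(rng.normal(size=(16, 8)), rng.normal(size=8))
print(out.shape)  # (8,)
```

The payoff is that attention cost scales with top_k * chunk_size rather than the full past length, which is what makes generalizing far beyond the pretraining length plausible.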