AI News Roundup: Claude 4 Series Released, Google Launches Coding Agent Jules
China Post Securities · 2025-05-27 13:43
Quantitative Models and Construction

1. Model Name: Claude Opus 4
- **Model Construction Idea**: Designed for complex reasoning and software development tasks, with a focus on handling intricate codebases and long-horizon memory tasks [12][15]
- **Model Construction Process**:
  - Uses advanced memory handling to autonomously create and maintain "memory files" that store critical information during long-running tasks [16]
  - Demonstrated the ability to carry out complex tasks, such as navigating and completing objectives in the Pokémon game by creating and consulting its own "navigation guides" [16]
  - Achieved significant improvements in understanding and editing complex codebases, including high-precision cross-file modifications [15][17]
- **Model Evaluation**: Substantially expands the boundaries of AI capability, particularly in coding and reasoning tasks, and delivers industry-leading performance on complex-codebase understanding [15][16]

2. Model Name: Claude Sonnet 4
- **Model Construction Idea**: A balanced model that prioritizes cost-efficiency while retaining strong coding and reasoning capabilities [12][16]
- **Model Construction Process**:
  - Builds on Claude Sonnet 3.7, with improvements in instruction adherence and reasoning [16]
  - Shows a reduced tendency to exploit system vulnerabilities, with such behaviors down 65% versus its predecessor [16]
- **Model Evaluation**: Less powerful than Opus 4, but strikes an effective balance between performance and efficiency, making it a practical choice for broader applications [16]

3. Model Name: Cosmos-Reason1
- **Model Construction Idea**: Designed for physical reasoning tasks, combining physical common sense with embodied reasoning so that AI systems can understand spatiotemporal relationships and predict behaviors [29][30]
- **Model Construction Process**:
  - Uses a hybrid Mamba-MLP-Transformer architecture that combines time-series modeling with long-context processing [30]
  - The multimodal pipeline runs a vision encoder (ViT) for semantic feature extraction, aligns the resulting features with text tokens, and feeds both into an 8B- or 56B-parameter backbone network (a schematic sketch of this pipeline follows the backtesting results below) [30]
  - Training proceeds in four stages:
    1. Vision pretraining for cross-modal alignment
    2. Supervised fine-tuning for foundational capabilities
    3. Specialized fine-tuning on physical-AI knowledge (spatial, temporal, and basic physics)
    4. Reinforcement learning with the GRPO algorithm, using novel reward mechanisms built on spatiotemporal puzzles [30]
- **Model Evaluation**: Demonstrates groundbreaking physical-reasoning capabilities, including long-chain reasoning (37+ steps) and spatiotemporal prediction, outperforming other models on physical common sense and embodied reasoning benchmarks [34][35]

---

Model Backtesting Results

1. Claude Opus 4
- **SWE-bench Accuracy**: 72.5% [12]
- **TerminalBench Accuracy**: 43.2% [12]

2. Claude Sonnet 4
- **SWE-bench Accuracy**: 72.7% (best among the Claude models) [16]

3. Cosmos-Reason1
- **Physical Common Sense Accuracy**: 60.2% across 426 videos and 604 tests [34]
- **Embodied Reasoning Performance**: 10% improvement in robotic-arm operation scenarios [34]
- **Intuitive Physics Benchmark**: Average score of 81.5% after reinforcement learning, outperforming other models by a wide margin [35]
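To make the multimodal pipeline described for Cosmos-Reason1 concrete, here is a minimal schematic sketch in PyTorch: vision-encoder features are projected for cross-modal alignment and concatenated with text tokens before entering a decoder backbone. Every module, name, and dimension (`VisionEncoder`, `MultimodalBackbone`, `d_model=512`, a two-layer Transformer standing in for the 8B/56B backbone) is an illustrative assumption, not NVIDIA's implementation.

```python
import torch
import torch.nn as nn

class VisionEncoder(nn.Module):
    """Stand-in for the ViT that extracts semantic features from video frames."""
    def __init__(self, d_model: int):
        super().__init__()
        self.patch_embed = nn.Linear(3 * 16 * 16, d_model)  # flattened 16x16 RGB patches

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, n_patches, 3*16*16) -> (batch, n_patches, d_model)
        return self.patch_embed(frames)

class MultimodalBackbone(nn.Module):
    """Aligns vision tokens with text tokens, then runs both through a backbone."""
    def __init__(self, vocab_size: int = 32000, d_model: int = 512):
        super().__init__()
        self.vision = VisionEncoder(d_model)
        self.projector = nn.Linear(d_model, d_model)  # cross-modal alignment layer
        self.text_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        # Two toy layers stand in for the 8B/56B-parameter LLM backbone.
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, frames: torch.Tensor, text_ids: torch.Tensor) -> torch.Tensor:
        vision_tokens = self.projector(self.vision(frames))      # (B, P, D)
        text_tokens = self.text_embed(text_ids)                  # (B, T, D)
        tokens = torch.cat([vision_tokens, text_tokens], dim=1)  # joint token stream
        return self.lm_head(self.backbone(tokens))

model = MultimodalBackbone()
frames = torch.randn(1, 64, 3 * 16 * 16)     # one clip's worth of flattened patches
text_ids = torch.randint(0, 32000, (1, 16))  # a short prompt
logits = model(frames, text_ids)
print(logits.shape)  # torch.Size([1, 80, 32000])
```

The real system reportedly replaces the toy Transformer with the hybrid Mamba-MLP-Transformer stack; the project-then-concatenate pattern is the part this sketch is meant to show.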
---

Quantitative Factors and Construction

1. Factor Name: Per-Layer Embeddings (PLE) in Gemma 3n
- **Factor Construction Idea**: Reduces the memory footprint of AI models while maintaining high performance on mobile devices [26][27]
- **Factor Construction Process**:
  - Applies PLE to optimize memory usage at the layer level
  - Combines PLE with KV-cache sharing and advanced activation quantization to improve response speed and cut memory consumption [27]
- **Factor Evaluation**: Enables high-performance AI applications on memory-constrained devices, achieving a 1.5x improvement in response speed over previous models [27]

2. Factor Name: Deep Think in Gemini 2.5 Pro
- **Factor Construction Idea**: Improves reasoning by generating and evaluating multiple hypotheses before responding [43][44]
- **Factor Construction Process**:
  - Implements a parallel reasoning architecture inspired by AlphaGo's decision-making mechanism
  - Dynamically adjusts "thinking budgets" (token usage) to balance response quality against computational cost (a hedged sketch of budget control via the public Gemini SDK follows the factor backtesting results below) [43][44]
- **Factor Evaluation**: Achieves superior performance on complex reasoning tasks, scoring 84.0% on the MMMU test and significantly outperforming competitors [43][44]

---

Factor Backtesting Results

1. Per-Layer Embeddings (PLE) in Gemma 3n
- **WMT24++ Multilingual Benchmark**: 50.1%, demonstrating strong performance in non-English languages [27]

2. Deep Think in Gemini 2.5 Pro
- **MMMU Score**: 84.0% [43]
- **MRCR 128K Test (Long-Context Memory Accuracy)**: 83.1%, significantly higher than OpenAI's comparable models [44]
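The "thinking budget" lever is exposed in Google's public google-genai SDK as a per-request cap on reasoning tokens, which gives a concrete feel for the quality-versus-cost trade-off described above. A minimal sketch follows; it assumes `pip install google-genai`, a `GEMINI_API_KEY` in the environment, and an illustrative model name and budget value. Deep Think's internal parallel-hypothesis search is not itself a settable parameter.

```python
# pip install google-genai
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Cap the tokens the model may spend on internal reasoning before answering.
# Larger budgets trade latency and cost for deeper deliberation.
response = client.models.generate_content(
    model="gemini-2.5-pro",  # illustrative model name; check current docs
    contents="A bat and a ball cost $1.10 in total. The bat costs $1.00 "
             "more than the ball. How much does the ball cost?",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=1024)  # reasoning-token cap
    ),
)
print(response.text)
```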
Tencent Research Institute AI Digest 20250523
Tencent Research Institute · 2025-05-22 15:09
Group 1: OpenAI Innovations
- OpenAI's Responses API now supports MCP services, allowing developers to connect external services with simple configurations and significantly reducing development complexity (a hedged sketch of such a call appears after Group 6 below) [1]
- The updated API strengthens security controls through the allowed_tools parameter and permission management, ensuring agents use tools safely [1]
- New features include image generation, Code Interpreter, file search, background mode, reasoning summaries, and encrypted reasoning items [1]

Group 2: Microsoft's Magentic-UI
- Microsoft launched Magentic-UI, an open-source web-agent project capable of automatic web browsing, file reading/writing, and code execution, all under user monitoring and control [2]
- The system uses a collaborative plan-and-execute mechanism, generating task plans for user confirmation and allowing real-time intervention during execution [2]
- The project integrates technologies such as neural style engines, component DNA mapping, and performance prediction for intelligent style conversion and component reuse [2]

Group 3: Mistral's Devstral Model
- Mistral, in collaboration with All Hands AI, released the open-source language model Devstral, featuring 24 billion parameters and able to run on a single RTX 4090 or a Mac with 32GB of RAM [3]
- Devstral scored 46.8% on the SWE-Bench Verified benchmark, outperforming GPT-4.1-mini and other open-source models and showcasing strong code understanding and problem solving [3]
- The model is released under the Apache 2.0 license for commercial use, with pricing of $0.10 per million input tokens and $0.30 per million output tokens [3]

Group 4: xAI's Live Search API
- xAI introduced the Live Search API, giving Grok AI real-time data access so it can retrieve the latest information from the X platform, web content, and breaking news [4][5]
- The API offers flexible search controls, including enabling/disabling search, limiting the number of results, and specifying time ranges and domains, combined with DeepSearch for displaying the model's reasoning [5]
- A Python SDK is available, with free beta testing until June 5, 2025, letting developers build real-time information queries and research assistance [5]

Group 5: OpenAI's Acquisition of Jony Ive's Team
- OpenAI acquired the AI device startup io for $6.5 billion, gaining a hardware team led by former Apple Chief Design Officer Jony Ive, with the deal expected to close by summer [6]
- io is developing new forms of AI devices aimed at reducing screen time, including headphones, wearables, and AI home devices, with a projected release in 2026 [6]
- The affiliated firm LoveFrom will continue to operate independently while taking on more design responsibility for OpenAI, including the ChatGPT interface and voice-interaction products [6]

Group 6: Kunlun Wanwei's Skywork Super Agents
- Kunlun Wanwei launched Skywork Super Agents, integrating five expert agents and one general agent for one-stop generation of documents, PPTs, and spreadsheets [7]
- The product is built on deep-research technology, supporting deep information retrieval and traceable content generation at only 40% of OpenAI's cost, with the framework open-sourced [7]
- Features include automated requirement clarification, information tracing, and a personal knowledge base, letting users upload files in various formats to build knowledge bases [7]
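As a concrete illustration of Group 1, here is a hedged sketch of attaching an MCP server to the Responses API with the allowed_tools restriction mentioned above. It assumes the openai Python package and an OPENAI_API_KEY in the environment; the server URL, label, and tool name are placeholders rather than a real service.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "mcp",
        "server_label": "docs_search",             # placeholder label
        "server_url": "https://example.com/mcp",   # placeholder MCP endpoint
        "allowed_tools": ["search_documents"],     # restrict which tools the agent may call
        "require_approval": "never",               # skip per-call approval prompts
    }],
    input="Find the section of the handbook that covers expense reports.",
)
print(response.output_text)
```

Restricting `allowed_tools` to an explicit list is the security control the digest highlights: the agent can only invoke tools you have named, even if the MCP server exposes more.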
Group 7: Microsoft's Aurora Model
- Microsoft introduced Aurora, the first large-scale atmospheric foundation model, trained on millions of hours of atmospheric data and achieving computation speeds roughly 5,000 times faster than the most advanced numerical forecasting systems [8]
- Aurora excels at predicting air quality, wave patterns, tropical cyclone trajectories, and high-resolution weather, maintaining high accuracy even in data-scarce regions and extreme weather [8]
- The model uses a 3D Swin Transformer architecture that can be fine-tuned for different application areas, with a training cycle of only 4-8 weeks and planned expansion into ocean circulation and seasonal weather prediction (a toy illustration of the 3D patch tokenization follows Group 9) [8]

Group 8: Gartner's Principles for Intelligent Applications
- Gartner expects GenAI to push enterprise software from auxiliary tools toward intelligent agents, outlining five principles for building intelligent applications: adaptive experience, embedded intelligence, autonomous orchestration, interconnected data, and composable architecture [9]
- Intelligent applications emphasize personalized experiences and proactive services, enabling cross-system tasks through natural-language interaction, with AI capabilities deeply embedded in business logic for process optimization [9]
- Enterprises need balanced investment across the five principles while upgrading foundational data, processes, architecture, and experiences so that intelligent applications move from pilot demonstrations to scaled value delivery [9]

Group 9: a16z's Insights on AI Programming
- The AI coding market has become the second-largest AI market after chatbots, with an addressable market valued at approximately $3 trillion, and developers, as early technology adopters, are adopting the tools rapidly [10]
- AI programming will not fully replace traditional programming; understanding foundational abstractions and system architecture remains crucial, and developer roles are shifting toward product management or QA engineering [10]
- New demographics and methods are fostering a new software paradigm, much like the WordPress era: AI lowers the barrier to "writing code," yet the depth and complexity of software development still demand professional expertise [10]
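Group 7's 3D Swin Transformer is easiest to picture at the tokenization step: atmospheric fields arrive as (variables, levels, lat, lon) volumes and are cut into 3D patches that become transformer tokens. The toy sketch below shows only that step, with made-up dimensions; it is not Aurora's code, which Microsoft publishes separately.

```python
import torch
import torch.nn as nn

class PatchEmbed3D(nn.Module):
    """Toy 3D patch embedding in the spirit of a 3D Swin Transformer:
    a strided 3D convolution turns non-overlapping (level, lat, lon)
    patches into one token each."""
    def __init__(self, in_ch: int = 4, patch=(2, 4, 4), d_model: int = 256):
        super().__init__()
        self.proj = nn.Conv3d(in_ch, d_model, kernel_size=patch, stride=patch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, variables, levels, lat, lon)
        x = self.proj(x)                     # (batch, d_model, L', H', W')
        return x.flatten(2).transpose(1, 2)  # (batch, tokens, d_model)

embed = PatchEmbed3D()
volume = torch.randn(1, 4, 8, 32, 64)  # 4 variables, 8 pressure levels, 32x64 grid
tokens = embed(volume)
print(tokens.shape)  # torch.Size([1, 512, 256]): (8/2)*(32/4)*(64/4) = 512 tokens
```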
Performance That Crushes GPT-4.1-mini! Mistral Open-Sources Devstral, and It Even Runs on a Laptop
机器之心 (Synced) · 2025-05-22 10:25
Core Viewpoint
- Mistral, the French AI startup, has re-entered the open-source AI community with Devstral, a new open-source language model featuring 24 billion parameters and designed for local, on-device deployment [2][3].

Group 1: Model Features and Performance
- Devstral runs on a single RTX 4090 GPU or a Mac with 32GB of RAM, making it well suited to local deployment [3].
- The model ships under the permissive Apache 2.0 license, allowing developers and organizations to deploy, modify, and commercialize it without restriction [4].
- Devstral is built specifically for real-world software engineering challenges, such as tracing relationships between components in large codebases and spotting subtle errors in complex functions [4][5].
- On the SWE-Bench Verified benchmark, Devstral scored 46.8%, beating all previously released open-source models and surpassing several closed-source models, including GPT-4.1-mini by over 20 percentage points [6][7].
- Evaluated under the same test scaffold, Devstral significantly outperformed much larger models such as DeepSeek-V3-0324 (671B) and Qwen3-235B-A22B [9].

Group 2: Accessibility and Pricing
- Devstral is available through Mistral's La Plateforme API, priced at $0.10 per million input tokens and $0.30 per million output tokens (a hedged sketch of an API call follows below) [12].
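Here is a hedged sketch of calling Devstral through the hosted API using the official mistralai Python client. The model identifier `devstral-small-2505` follows Mistral's release naming but should be verified against current documentation; the sketch assumes a MISTRAL_API_KEY environment variable.

```python
# pip install mistralai
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Ask the model for the kind of multi-file bug localization
# Devstral was trained for on real GitHub issues.
response = client.chat.complete(
    model="devstral-small-2505",  # assumed model name; verify in Mistral's docs
    messages=[{
        "role": "user",
        "content": "Given a stack trace pointing at utils/parse.py, outline how you "
                   "would track down an off-by-one error across its call sites.",
    }],
)
print(response.choices[0].message.content)
```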
24B Model Outcodes the Entire DeepSeek Lineup, Runs on a 32GB Mac, and Was Trained Specifically on Real GitHub Issues
量子位 (QbitAI) · 2025-05-22 03:21
Core Viewpoint
- Mistral has launched Devstral, a new open-source programming model that outperforms existing models on software engineering tasks while remaining light enough to run on consumer-grade hardware [2][3][4].

Group 1: Product Features
- Devstral is purpose-built for programming agents, addressing the limitations of traditional large models that struggle with real-world software engineering work [4].
- The model was trained on real GitHub issues, focusing on understanding code context, the relationships between components, and subtle errors in complex functions [5].
- On the SWE-Bench Verified benchmark, Devstral achieved state-of-the-art performance among open-source models and surpassed many closed-source models of similar parameter scale [5].

Group 2: Development and Collaboration
- Devstral was developed in collaboration with All Hands AI and is released under the Apache 2.0 open-source license, a more open stance than Mistral's previous models [7].
- All Hands AI focuses on building agent frameworks rather than foundation models, promoting the idea of "writing less code, doing more" [17].
- Devstral plugs into All Hands AI's frameworks such as OpenHands and SWE-Agent, enabling it to carry out tasks typically done by human programmers (a hedged local-serving sketch follows Group 3) [18].

Group 3: Current Status and Future Plans
- Devstral is currently a research preview; the team is working on enhancing the model's capabilities and plans to release a more powerful agentic coding model in the coming weeks [22].
- Since launching in April of the previous year, OpenHands has collected over 50,000 stars on GitHub, signaling strong interest and engagement from the developer community [23].
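Because Devstral fits on consumer hardware, a common pattern is to serve it locally and point any OpenAI-compatible client (or an agent framework such as OpenHands) at the local endpoint. The sketch below assumes an Ollama server on its default port with a model pulled under the tag `devstral`; both the tag and the port are assumptions to verify against your own setup.

```python
# Assumes `ollama pull devstral` has been run and the Ollama server is up.
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API on localhost:11434.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # key unused locally

response = client.chat.completions.create(
    model="devstral",  # local model tag; adjust to whatever name you pulled
    messages=[{
        "role": "user",
        "content": "Write a pytest that reproduces an IndexError when a list is empty.",
    }],
)
print(response.choices[0].message.content)
```

The same base_url swap is how agent frameworks are typically pointed at a local model instead of a hosted API, which is what makes a 24B single-GPU model practical for agentic coding.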