Cursor 2.0 launches with its self-developed model Composer, no longer just a "wrapper"
机器之心·2025-10-30 01:41

Core Insights
- Cursor has officially launched its own large language model, Composer, marking a significant evolution from a platform reliant on third-party models to an AI-native one [2][4][3]
- The release of Composer is seen as a breakthrough that strengthens Cursor's coding and software-development capabilities [4][3]

Summary by Sections

Composer Model
- Composer is a frontier-class model that, while not as capable as top models such as GPT-5, runs roughly four times faster than models of comparable intelligence [6]
- In benchmark tests, Composer generates 250 tokens per second, about twice the speed of leading fast-inference models and four times that of comparably capable systems [9]
- The model is designed for low-latency coding tasks, with most interactions completing within 30 seconds; early testers found its rapid iteration loop easy to work with [11]
- During training, Composer had access to a rich set of tools, including semantic search across entire codebases, which markedly improves its ability to understand and navigate large codebases [12]
- Composer is a mixture-of-experts (MoE) model, specialized for software engineering through reinforcement learning, with support for generating and understanding long contexts [16][19]

Cursor 2.0 Update
- Cursor 2.0 introduces a multi-agent interface that lets users run several AI agents in parallel, boosting productivity by having agents handle different parts of a project [21][24]
- The new version centers the workflow on agents rather than the traditional file structure, letting users focus on desired outcomes while agents manage the details [22]
- Cursor 2.0 also targets the new bottlenecks of code review and change testing, making agent changes faster to review and supporting deeper code exploration when necessary [25]

Infrastructure and Training
- Training large MoE models demands significant infrastructure investment; Cursor built a customized asynchronous reinforcement-learning training environment on PyTorch and Ray [28]
- The team implemented MXFP8 MoE kernels to train models efficiently across thousands of NVIDIA GPUs, achieving faster inference without the need for post-training quantization [28]
- The Cursor Agent framework lets models use a range of tools for editing code, semantic search, and executing terminal commands, which requires robust cloud infrastructure to support many concurrent operations [28]

Community Feedback
- The major update has drawn significant attention; early users report mixed feedback, highlighting both positive experiences and areas for improvement [30][31]
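Cursor has not published Composer's architecture in detail, so as a rough illustration of the mixture-of-experts idea mentioned above, here is a minimal top-k expert-routing sketch in NumPy. All names, shapes, and the routing scheme are illustrative assumptions, not Composer's actual design: a router scores each token against every expert, and only the top-k experts' outputs are computed and mixed.

```python
import numpy as np

def moe_forward(x, gate_w, expert_ws, top_k=2):
    """Minimal MoE layer sketch (illustrative, not Composer's design).

    x:         (tokens, d_model) input activations
    gate_w:    (d_model, n_experts) router weights
    expert_ws: list of (d_model, d_model) per-expert weight matrices
    """
    logits = x @ gate_w                              # (tokens, n_experts) router scores
    top = np.argsort(logits, axis=-1)[:, -top_k:]    # top_k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        w = np.exp(sel - sel.max())                  # softmax over selected experts only
        w /= w.sum()
        for weight, e in zip(w, top[t]):
            out[t] += weight * (x[t] @ expert_ws[e])  # mix chosen experts' outputs
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
gate = rng.normal(size=(8, 4))
experts = [rng.normal(size=(8, 8)) for _ in range(4)]
y = moe_forward(x, gate, experts)
```

The design point this illustrates is why MoE helps latency-sensitive products like Composer: with top-2 routing over 4 experts, each token touches only half the expert parameters per layer, so capacity grows without a proportional increase in per-token compute.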