Surprising: AI development without a PRD, a zero-code demo runs the entire workflow, and efficiency jumps 40%
Sou Hu Cai Jing· 2025-12-05 23:06
Have you noticed? What product managers working on AI products fear most these days isn't requirements that change back and forth, but spending several days on a 200-page PRD only for the developers to read it and still look lost, with the finished product coming out nothing like what they had in mind. Take a cross-border AI medical consultation project that set out to build an "AI doctor" that could chat with patients about their symptoms, produce medical records for physicians, and check inventory to dispense medication. The product manager shut themselves away for a week to squeeze out an ultra-detailed PRD, and the review meeting promptly fell apart: the backend couldn't figure out how "intelligently recommend substitute drugs" would actually be implemented, the doctors questioned the AI's empathy logic, and the frontend complained the streaming output was dizzying to watch. This is hardly an isolated case; product development in the AI era no longer follows the old playbook. When building a traditional app, features were fixed rules, and writing down "tap A, jump to B" was enough for a PRD to work well. But AI is a living thing whose responses are never quite predictable; trying to define it with rigid text is like teaching someone to date from an instruction manual, and it simply cannot be spelled out that way. Fortunately, AI coding tools exploded in 2025 and made "code as requirements" a reality, so even a product manager with no technical background can personally turn an idea into a runnable demo. 1. When the PRD Fails: A Common Ailment of AI Products. To be frank, traditional PRDs break down in AI projects for two core reasons: the AI's temperament is hard to pin down, and the business flow follows no regular pattern. Let's start with the AI's "bad temper." You write in the document that "the tone should be ...
Foreigners stunned: asked in plain English, DeepSeek still insists on thinking in Chinese
36Kr· 2025-12-03 09:14
Just the day before yesterday, DeepSeek released two new models at once: DeepSeek-V3.2 and DeepSeek-V3.2-Speciale. Both bring a marked improvement in reasoning: DeepSeek-V3.2 can go head-to-head with GPT-5, while Speciale combines long-form thinking with theorem-proving ability and performs on par with Gemini-3.0-Pro. One reader commented: "This model shouldn't be called V3.2; it should be called V4." Overseas researchers rushed to try the new version, and while marveling at DeepSeek's much faster reasoning, they ran into something they couldn't make sense of: even when queried in English, DeepSeek still switches back to "mysterious Eastern characters" during its thinking process. That left them baffled: the question clearly wasn't in Chinese, so why does the model still think in Chinese? Is reasoning in Chinese better or faster? The comment section split into two camps, but most agreed that "Chinese characters carry a higher information density." A researcher from Amazon thought so too: this matches everyday intuition, since expressing the same meaning in Chinese takes noticeably fewer characters. If large-model understanding is tied to semantic compression, then Chinese compresses more efficiently than the widely used English. Perhaps this is also where the claim that "Chinese saves tokens" comes from. ...
Foreigners stunned! Asked in plain English, DeepSeek still insists on thinking in Chinese
机器之心· 2025-12-03 08:30
Core Insights
- DeepSeek has launched two new models, DeepSeek-V3.2 and DeepSeek-V3.2-Speciale, which show significant improvements in reasoning capabilities, with the former being comparable to GPT-5 and the latter performing similarly to Gemini-3.0-Pro [1][4]
- There is a notable phenomenon where DeepSeek switches to Chinese during reasoning, even when queries are made in English, leading to discussions about the efficiency of Chinese in processing information [4][6]

Group 1: Model Performance
- The new models exhibit enhanced reasoning speed, attracting interest from overseas researchers [1]
- The comment section reflects a consensus that Chinese characters have a higher information density, requiring fewer characters to express the same meaning compared to English (see the token-count example after this list) [4][6]

Group 2: Cross-Lingual Reasoning
- Research indicates that using non-English languages for reasoning can lead to better performance and reduced token consumption, as shown in the paper "EfficientXLang" [7][8]
- The study found that reasoning in non-English languages can achieve a token reduction of 20-40% without sacrificing accuracy, with DeepSeek R1 showing reductions from 14.1% (Russian) to 29.9% (Spanish) [11]

Group 3: Language Efficiency
- Although Chinese can save reasoning token costs compared to English, it is not the most efficient language; Polish ranks highest in long-context tasks [12][14]
- The performance of models varies significantly based on the language used for instructions, with English not being the top performer in long-context tasks [14][18]

Group 4: Training Data Influence
- The prevalence of Chinese training data in domestic models explains the tendency for these models to think in Chinese [20][21]
- The phenomenon of models like OpenAI's o1-pro occasionally using Chinese during reasoning raises questions about the influence of training data composition [24][25]
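As a rough, self-contained illustration of the token-efficiency claim above, the snippet below counts tokens for one English sentence and a Chinese rendering of the same idea using OpenAI's tiktoken library. The sample sentences and the cl100k_base encoding are assumptions made for the example; they are not the EfficientXLang setup, and counts will vary by tokenizer.

```python
# Minimal sketch: compare character and token counts for equivalent
# English and Chinese sentences. Sentences are illustrative only.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

pairs = {
    "en": "Reasoning in a non-English language can reduce token usage without hurting accuracy.",
    "zh": "用非英语语言推理可以在不损失准确率的情况下减少token消耗。",
}

for lang, text in pairs.items():
    tokens = enc.encode(text)
    # Chinese needs far fewer characters; whether it also needs fewer tokens
    # depends on how the tokenizer segments each script.
    print(f"{lang}: {len(text)} chars -> {len(tokens)} tokens")
```

Running this for a few sentence pairs gives a quick sense of how much of the "Chinese saves tokens" effect comes from the script itself versus the tokenizer's vocabulary.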
Z Product | Product Hunt's best products (Oct 27 - Nov 2): Cursor and Vercel dominate the charts
Z Potentials· 2025-11-09 03:01
Core Insights
- The article highlights the top 10 AI tools and platforms that have gained significant traction in the market, showcasing their unique features and target audiences [2][3]

Group 1: Cursor 2.0
- Cursor 2.0 is an AI code editor that integrates self-developed coding models and multi-agent collaboration, aimed at enhancing coding efficiency for developers [3][6]
- It addresses issues of fragmented AI programming assistant functionalities and inefficient collaboration by providing a unified interface [6][7]
- The product has received 689 Upvotes and 42 comments, indicating strong user interest [8]

Group 2: v0 by Vercel
- v0 by Vercel is a collaborative AI full-stack development platform designed to assist development teams in application design and delivery [9][11]
- It offers real-time AI assistance for UI generation and code iteration, significantly improving front-end and back-end collaboration [11][12]
- The platform has garnered 589 Upvotes and 44 comments, reflecting its popularity [13]

Group 3: Postiz
- Postiz is an AI-powered scheduling tool for social media, capable of managing content across over 20 platforms [14][16]
- It automates the scheduling process, addressing the complexities of multi-platform operations and enhancing marketing efficiency [16][18]
- The tool has achieved 561 Upvotes and 61 comments, showcasing its effectiveness [19]

Group 4: Sentra by Dodo Payments
- Sentra is an automated payment and billing solution tailored for AI, SaaS, and digital products [20][21]
- It simplifies the integration of multiple payment channels and automates billing management, catering to the needs of digital product companies [21][23]
- The platform has received 531 Upvotes and 64 comments, indicating strong market interest [24]

Group 5: Superinbox
- Superinbox is an AI-based email management tool designed to enhance communication efficiency for busy professionals [25][29]
- It learns user writing styles to draft replies and organizes inboxes, saving users significant time [29][31]
- The tool has garnered 513 Upvotes and 86 comments, highlighting its utility [31]

Group 6: Dynal.AI
- Dynal.AI is an intelligent content generation tool focused on transforming various media into LinkedIn posts [32][33]
- It automates the content creation process, helping users maintain a strong social media presence [33][34]
- The platform has achieved 461 Upvotes and 80 comments, reflecting its appeal [35]

Group 7: Parallax by Gradient
- Parallax is a distributed AI computing platform that enables users to build multi-device AI clusters [37][40]
- It addresses the limitations of single-device deployments by allowing collaborative work across different hardware [40][41]
- The platform has received 444 Upvotes and 60 comments, indicating user interest [43]

Group 8: Base44
- Base44 is a no-code application building platform that integrates intelligent search and automation [44][45]
- It allows non-technical users to create applications quickly and intuitively, reducing the reliance on traditional coding [45][47]
- The platform has garnered 448 Upvotes and 21 comments, showcasing its effectiveness [48]

Group 9: Animation Builder by Unicorns Club
- Animation Builder is a free tool that helps entrepreneurs create animated videos to showcase milestones [49][50]
- It simplifies the content creation process, enhancing visibility and engagement on social media platforms [50][51]
- The tool has achieved 439 Upvotes and 69 comments, reflecting its popularity [52]

Group 10: Peakflo AI Voice Agents
- Peakflo AI Voice Agents are intelligent voice assistants designed to automate business calls [53][54]
- They enhance customer communication efficiency and automate operational processes, reducing costs and errors [54][55]
- The platform has received 438 Upvotes and 79 comments, indicating strong market interest [56]
Microsoft targets the new "superintelligence" track; the STAR Market AI ETF (588790) pullback offers an entry point
Xin Lang Cai Jing· 2025-11-07 03:08
Core Insights
- The Shanghai Stock Exchange Sci-Tech Innovation Board Artificial Intelligence Index has decreased by 1.86% as of November 7, 2025, with notable declines in stocks such as Fudan Microelectronics and Chipone Technology [3]
- Microsoft is pursuing a more advanced form of AI called "superintelligence," aiming for breakthroughs in fields like medicine and materials science, led by Mustafa Suleyman [3]
- The AI programming platform Cursor has upgraded to version 2.0, introducing a self-developed model called Composer, which significantly enhances coding efficiency and speed [4]

Market Performance
- The Sci-Tech AI ETF (588790) has seen a decline of 1.88%, with the latest price at 0.78 yuan, but has accumulated a 22.73% increase over the past three months [3]
- The Sci-Tech AI ETF has experienced significant growth in scale, increasing by 32.46 billion yuan over the past six months, ranking it among the top 10 comparable funds [4]
- The ETF's recent weekly share growth was 14.4 million shares, placing it second among comparable funds [5]

Fund Flow and Composition
- The latest net outflow for the Sci-Tech AI ETF was 23.71 million yuan, but it has seen net inflows on four out of the last five trading days, totaling 140 million yuan [5]
- The index tracks 30 large-cap stocks that provide foundational resources and technology for the AI sector, with the top ten stocks accounting for 70.92% of the index [5]
AI News: 1x Neo Robot, Extropic TSU, Minimax M2, Cursor 2, and more!
Matthew Berman· 2025-10-30 20:16
Robotics & Automation
- 1X's Neo robot is available for pre-order at $20,000, or $499 per month, with availability expected in early 2026 [1][2]
- Neo weighs 66 pounds and can lift 150 pounds, featuring 22 degrees of freedom in its hands and operating at 22 dB [2][3]
- The promise of humanoid robots is to be autonomous and run 24 hours a day [4]

Computing & AI
- Extropic is developing a thermodynamic computing platform (TSU) that claims to be up to 10,000 times more efficient than traditional CPUs and GPUs [7][8]
- MiniMax's M2, an open-source model from China, achieved a new high intelligence score with only 10 billion active parameters out of a 200 billion total [10]
- IBM released Granite 4.0 Nano, a family of small language models (LLMs) with 1.5 billion and 350 million parameters, designed for edge and on-device applications [19][20]
- Cursor 2.0 introduces Composer, a faster model for low-latency agentic coding, and a multi-agent interface [26][27]

Semiconductor Industry
- Substrate, a US-based startup, is building a next-generation foundry using advanced X-ray lithography to enable features printed at the 2 nanometer node and below [30][31]

Corporate Strategy & Employment
- Nvidia took a billion-dollar stake in Nokia, leading to a 22% increase in Nokia's shares, and the companies are partnering to develop 6G technology [17]
- Amazon is undergoing layoffs of 14,000 corporate employees, partly attributed to efficiency gains from AI, but also seen as a correction for overhiring [34][37]
- Tesla could potentially leverage the compute power of its idle cars, estimated at 1 kilowatt per car, to create a giant distributed inference fleet [23][24]
Jensen Huang personally endorses it: NVIDIA's favored coding tool, Cursor 2.0, debuts an in-house model that runs 4x faster
36Kr· 2025-10-30 07:33
Core Insights
- Cursor has launched its self-developed coding model, Composer, which is reported to be four times faster than comparable models and designed for low-latency intelligent coding tasks that can be completed in under 30 seconds [1][6][9]

Group 1: Product Features
- Composer achieves a speed of 200 tokens per second and allows up to eight intelligent agents to run in parallel, utilizing git worktrees or remote machines to prevent file conflicts (see the sketch after this list) [2][6]
- The update introduces a new code review feature that simplifies the process of viewing changes across multiple files without switching back and forth [3]
- A voice mode has been added, enabling voice-activated programming, along with improvements to context-aware copy/paste prompts [5][6]

Group 2: Market Position and Strategy
- Cursor, valued at over $10 billion, has historically relied on external models like Claude, which limited its innovation and profitability. The release of Composer marks a strategic shift towards self-reliance in AI model development [6][22]
- The recent updates indicate a move away from dependence on external models, with Composer being tested alongside open-source alternatives rather than proprietary models like GPT and Claude [22][30]

Group 3: User Experience and Feedback
- Early testers have reported that Cursor 2.0 is significantly faster, with results generated in mere seconds, enhancing the overall user experience [16][26]
- Some developers have noted that while Composer is fast, its intelligence may not match that of competitors like Sonnet 4.5 and GPT-5, indicating a competitive landscape in AI programming tools [30][34]
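To make the worktree-based isolation mentioned in Group 1 concrete, here is a minimal Python sketch of how several agents could each be given a private git worktree so their edits never touch the same checkout. It illustrates the idea described above, not Cursor's implementation; the branch names, paths, and agent count are hypothetical.

```python
# Minimal sketch, assuming the current directory is a git repository:
# give each parallel agent its own branch and worktree so concurrent
# edits never collide in a shared checkout.
import subprocess
from pathlib import Path

REPO = Path(".")          # assumed: a local git repo
N_AGENTS = 8              # Cursor 2.0 reportedly runs up to 8 agents in parallel

def create_worktree(agent_id: int) -> Path:
    """Create a dedicated branch and worktree for one agent."""
    branch = f"agent-{agent_id}"
    path = REPO / ".worktrees" / branch
    subprocess.run(
        ["git", "worktree", "add", "-b", branch, str(path)],
        cwd=REPO, check=True,
    )
    return path

if __name__ == "__main__":
    worktrees = [create_worktree(i) for i in range(N_AGENTS)]
    print("\n".join(str(p) for p in worktrees))
```

Each agent then works inside its own directory, and its branch can be reviewed or merged afterwards through the normal git flow.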
Cursor 2.0 is here: parallel multi-agent support, an in-house model that finishes most tasks in 30 seconds, and MXFP8 training
36Kr· 2025-10-30 04:35
Core Insights
- Cursor has announced the upgrade to version 2.0, introducing its self-developed programming model, Composer, and 15 other enhancements aimed at improving the programming experience with AI agents [1][41]

Group 1: Model Performance
- The Composer model is designed for low-latency agentic programming, achieving speeds four times faster than comparable intelligent models, with a token output exceeding 200 tokens per second [1]
- Internal evaluations indicate that Composer surpasses leading open-source programming models in intelligence and outperforms lightweight models in speed, although it still lags behind GPT-5 and Claude Sonnet 4.5 in intelligence [1][3]

Group 2: User Interface Enhancements
- The UI of Cursor 2.0 has been redesigned to focus on agents rather than files, allowing developers to concentrate on specific goals and enabling up to 8 agents to run in parallel without interference [3][7]
- A new native browser feature allows agents to automatically test their work and iterate until correct results are produced, enhancing the user experience by enabling direct modifications to web elements [5][10]

Group 3: Code Review and Management
- The code review functionality has been improved to aggregate all modifications into a single interface, eliminating the need to switch between files [13]
- Team command features have been introduced, allowing team leaders to set custom commands and rules that automatically apply to all members, streamlining management [19][24]

Group 4: Performance and Reliability
- Cursor's cloud agents now boast a reliability rate of 99.9%, with improvements in the user interface for sending agents to the cloud [28]
- The performance of code execution has been enhanced, particularly for Python and TypeScript, with dynamic memory allocation based on available RAM [22]

Group 5: Self-Developed Model Insights
- The Composer model is a mixture-of-experts (MoE) model that supports long-context generation and understanding, optimized through reinforcement learning for software engineering tasks (a toy routing example follows after this list) [31][35]
- Cursor's training infrastructure has been customized to support asynchronous reinforcement learning at scale, utilizing low-precision training methods to enhance efficiency [40]

Group 6: Future Implications
- The advancements in Cursor's self-developed models indicate a strategic shift towards reducing reliance on external models, potentially positioning the company favorably in the competitive landscape of AI IDEs [41]
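For readers unfamiliar with the mixture-of-experts design mentioned in Group 5, the toy PyTorch layer below shows the basic routing idea: a router scores the experts for each token and only the top-k experts run. The sizes, top-k value, and expert shape are illustrative assumptions; Cursor has not published Composer's architectural details.

```python
# Toy top-k mixture-of-experts (MoE) layer: route each token to a small
# subset of expert MLPs and combine their outputs with router weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)   # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                              # x: (tokens, d_model)
        scores = self.router(x)                        # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1) # keep only the top-k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                  # tokens routed to expert e at slot k
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

# Example: 16 tokens pass through the layer; only 2 of 8 experts run per token.
print(TinyMoE()(torch.randn(16, 64)).shape)            # torch.Size([16, 64])
```

Because only a fraction of the experts execute per token, an MoE model can hold many more parameters than it actually computes with on any single token, which is what makes the speed/intelligence trade-off discussed above possible.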
Just now: Cursor 2.0 makes a strong debut with its in-house model Composer, no longer just a "wrapper"
机器之心· 2025-10-30 01:41
Core Insights
- Cursor has officially launched its own large language model, Composer, marking a significant evolution from being a platform reliant on third-party models to becoming an AI-native platform [2][4][3]
- The release of Composer is seen as a breakthrough that enhances Cursor's capabilities in coding and software development [4][3]

Summary by Sections

Composer Model
- Composer is a cutting-edge model that, while not as intelligent as top models like GPT-5, boasts a speed that is four times faster than comparable intelligent models [6]
- In benchmark tests, Composer achieved a generation speed of 250 tokens per second, which is double that of leading fast inference models and four times that of similar advanced systems [9]
- The model is designed for low-latency coding tasks, with most interactions completed within 30 seconds, and early testers have found its rapid iteration capabilities to be user-friendly [11]
- Composer utilizes a robust set of tools for training, including semantic search across entire codebases, significantly enhancing its ability to understand and process large codebases [12]
- The model is a mixture-of-experts (MoE) architecture, optimized for software engineering through reinforcement learning, allowing it to generate and understand long contexts [16][19]

Cursor 2.0 Update
- Cursor 2.0 introduces a multi-agent interface that allows users to run multiple AI agents simultaneously, enhancing productivity by enabling agents to handle different parts of a project [21][24]
- The new version focuses on an agent-centric approach rather than a traditional file structure, allowing users to concentrate on desired outcomes while agents manage the details [22]
- Cursor 2.0 addresses new bottlenecks in code review and change testing, facilitating quicker reviews of agent changes and deeper code exploration when necessary [25]

Infrastructure and Training
- The development of large MoE models requires significant investment in infrastructure, with Cursor utilizing PyTorch and Ray to create a customized training environment for asynchronous reinforcement learning (a rollout sketch follows after this list) [28]
- The team has implemented MXFP8 MoE kernels to train models efficiently across thousands of NVIDIA GPUs, achieving faster inference speeds without the need for post-training quantization [28]
- The Cursor Agent framework allows models to utilize various tools for code editing, semantic searching, and executing terminal commands, necessitating a robust cloud infrastructure to support concurrent operations [28]

Community Feedback
- The major update has garnered significant attention, with early users providing mixed feedback, highlighting both positive experiences and areas for improvement [30][31]
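The PyTorch + Ray setup described under Infrastructure and Training can be pictured with a minimal asynchronous-rollout sketch: workers attempt tasks in parallel and the learner consumes whichever results finish first. The environment, reward, and task list here are placeholders, since Cursor's actual RL pipeline is not public.

```python
# Minimal sketch of asynchronous rollout collection with Ray: launch many
# rollouts at once and process results as they complete, rather than in order.
import random
import ray

ray.init(ignore_reinit_error=True)

@ray.remote
def rollout(task_id: int) -> dict:
    """Placeholder agent attempt at a coding task, returning a scalar reward."""
    # In a real pipeline this would run the agent in a sandboxed repo and
    # score it, e.g. by whether the tests pass.
    return {"task": task_id, "reward": random.random()}

pending = [rollout.remote(i) for i in range(32)]
while pending:
    done, pending = ray.wait(pending, num_returns=1)   # take whatever finished first
    result = ray.get(done[0])
    # ...here the learner would add `result` to its batch and update the policy.
    print(result)
```

Decoupling rollout collection from policy updates in this way is what lets slow, real-world tasks (compiling, running tests) be parallelized across a large cluster.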
Cursor releases its first large coding model! Code generation at 250 tokens/second, built on reinforcement learning and an MoE architecture
量子位· 2025-10-30 01:06
Core Insights
- Cursor has officially released its first in-house coding model, named Composer, as part of the Cursor 2.0 update [1][2]
- Composer is reported to complete complex tasks in just 30 seconds, achieving a speed increase of 400% compared to competitors [3][12]

Model Features
- The new Cursor 2.0 includes a native browser tool that allows the model to test, debug, and iterate code autonomously until achieving correct results [4]
- Voice code generation enables users to convert their thoughts into code without typing [5]
- The interface has shifted from a file-centric to an agent-centric model, allowing multiple agents to run simultaneously without interference [6][7]

Performance Metrics
- Composer generates code at a speed of 250 tokens per second, which is approximately twice as fast as the current leading models like GPT-5 and Claude Sonnet 4.5 [19][20]
- The model demonstrates enhanced reasoning and task generalization capabilities, comparable to mid-tier leading models [21]

Training Methodology
- Composer's performance is attributed to reinforcement learning, which allows the model to learn from real programming tasks rather than static datasets (see the sketch after this list) [22][26]
- The training process involves the model working directly within a complete codebase, utilizing production-level tools to write, test, and debug code [27][28]

Practical Application
- Cursor 2.0 is designed to provide a practical AI system that aligns closely with developers' daily workflows, enhancing its usability in real-world scenarios [35][36]
- The model has shown emergent behaviors, such as running unit tests and autonomously fixing code format errors [31]

Transparency and Model Origin
- There are concerns regarding the transparency of Composer's foundational model, with questions about whether it is based on pre-existing models or entirely self-trained [37][40]
- Cursor has previously developed an internal model named Cheetah, which was used for testing speed and system integration [42]
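One way to picture "learning from real programming tasks rather than static datasets" is a reward computed from the project's own test suite, as in the hypothetical sketch below. This is an assumption about how such a signal could be wired up, not a description of Cursor's training code; the pytest command and working directory are placeholders.

```python
# Hypothetical sketch: score an agent's edit by running the repository's
# test suite in its working copy and turning the result into a reward.
import subprocess

def test_reward(workdir: str) -> float:
    """Return 1.0 if the test suite passes in `workdir`, else 0.0."""
    result = subprocess.run(
        ["python", "-m", "pytest", "-q"],   # assumes a pytest-based project
        cwd=workdir,
        capture_output=True,
    )
    return 1.0 if result.returncode == 0 else 0.0

# Example: score the current checkout (hypothetical path).
print(test_reward("."))
```

A binary pass/fail signal like this is the simplest possible reward; real pipelines typically blend it with finer-grained signals such as per-test results or lint checks.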