Vibe Coding (氛围编程)
Karpathy's Verdict: Vibe Coding Is Over, AI Takes Over 99% of Code, and the Era of "Agentic Engineering" Begins
36Kr · 2026-02-05 13:09
Core Insights
- The article discusses the evolution from "Vibe Coding" to "Agentic Engineering," emphasizing that AI programming has matured from playful experimentation to a more structured and professional approach [2][3][8]
- Andrej Karpathy introduces "Agentic Engineering" as a new paradigm in which professionals command AI agents rather than write code directly, highlighting the importance of architecture and engineering skills in this new landscape [8][9][20]

Group 1: Transition from Vibe Coding to Agentic Engineering
- "Vibe Coding" was characterized by casual, intuitive coding with limited capabilities, while "Agentic Engineering" represents a shift toward a more rigorous, quality-focused programming methodology [8][15]
- The term "Agentic" reflects the role of professionals as overseers of AI: 99% of the time they are directing AI agents rather than writing code themselves [8][9]
- The transition signifies a move from playful coding to serious application of AI in professional settings, requiring deeper skills and understanding [15][18]

Group 2: Community Response and Industry Implications
- The introduction of "Agentic Engineering" has resonated within the developer community, with many expressing excitement about the new title and its implications for their work [11][13]
- Comments from industry professionals indicate that the shift is not merely a rebranding but a reflection of technological maturity, where effective AI utilization is now essential [14][15]
- The article highlights a spectrum effect in the industry: junior developers may struggle with the complexities of AI programming, while experienced developers leverage their skills to enhance productivity significantly [38][41]

Group 3: Skills and Strategies for Success
- To excel in the AI era, developers must focus on the design and management of code rather than just writing it, emphasizing the need for architectural thinking [44]
- Effective communication with AI involves providing rich context and detailed instructions, which leads to better outcomes than vague prompts [29][30]
- The article stresses the importance of critical thinking and skepticism when overseeing AI-generated code, as this is crucial for maintaining quality and functionality [27][28]

Group 4: Future Predictions
- Karpathy predicts a dual evolution of the model and agent layers by 2026, suggesting this will lead to the emergence of "super individuals" capable of functioning as entire development teams [45]
- The article posits that mastering "Agentic Engineering" will empower individuals to create significant projects independently, transforming the landscape of software development [45][46]
The Father of Linux Eats His Words: Once Mocked AI Coding as Garbage, Now Vibe Coding Himself
36Kr · 2026-01-12 07:43
Core Insights
- Linus Torvalds, known as the father of Linux, has publicly acknowledged the effectiveness of AI in coding, marking a significant shift from his previously critical stance toward AI-generated code [3][20][32]
- Torvalds has embraced AI programming tools, particularly Google's Antigravity, in his personal projects, indicating broader acceptance of AI in software development [11][14][28]

Group 1: AI Programming Adoption
- Torvalds has transitioned from a vocal critic of AI-generated code to using it in his own projects, suggesting a paradigm shift in the programming community [20][32]
- "Vibe Coding," in which developers describe desired functionality to AI rather than writing code line by line, has gained traction, with Torvalds applying it successfully in his work [11][19]
- The acknowledgment of AI's capabilities by prominent figures such as Torvalds signals a potential revolution in coding practices, with predictions that by the end of 2026, 90% of code in startups may be AI-generated [28][30]

Group 2: Historical Context and Implications
- The evolution of AI programming tools from simple applications to essential productivity tools reflects a significant transformation in software development methodologies [29][30]
- The shift in Torvalds' perspective is emblematic of a larger trend among influential tech leaders, indicating that the software development landscape is undergoing fundamental change [19][32]
- The historical significance of this shift parallels past technological revolutions, suggesting that AI programming will redefine the programmer's role, moving from traditional coding to strategic oversight and architecture [31][32]
Report: Musk to Launch xAI's First AI Vibe Coding Tool, Grok Build, Next Month
Sohu Caijing · 2026-01-09 01:01
Core Viewpoint
- Elon Musk announced an upcoming upgrade to xAI's Grok Code that will enable users to complete complex programming tasks in a single prompt, significantly enhancing developer efficiency [1][3]

Group 1: Upgrade Features
- The core feature of the upgrade is "one-shot" prompting: users input detailed instructions just once to generate a complete, usable solution to a complex coding task [3]
- xAI is likely to introduce a new tool named "Grok Build," considered xAI's first "Vibe Coding" offering, facilitating a more intuitive programming experience [3]

Group 2: Industry Context
- "Vibe Coding" refers to a collaborative programming approach built on large language models (LLMs), in which developers interact with AI fluidly rather than focusing on syntax details [3]
- There is speculation that xAI aims to replicate the interactive model of Google AI Studio, potentially incorporating a command-line interface (CLI) to make the programming process smoother and more intuitive [4]
Cursor CEO Warns on "Vibe Coding": Blindly Trusting AI-Written Code Risks Shoddy Engineering
Sohu Caijing · 2025-12-26 06:09
Core Insights
- Michael Truell, CEO of Cursor, warns against "Vibe Coding," which allows rapid code generation but risks building software on unstable foundations [1]
- Truell emphasizes that while generative AI is transforming programming, over-reliance on it can lead to significant technical debt [1]

Group 1: Vibe Coding Concerns
- "Vibe Coding" has developers rely entirely on AI to complete tasks without understanding code details, which can be risky for complex projects [1]
- Truell compares the method to building a house without understanding its underlying structure, warning that it may lead to system collapse as complexity increases [1]
- Developers must retain the ability to inspect and understand the code, regardless of AI capabilities [1]

Group 2: Cursor's Solution
- Cursor integrates AI directly into the integrated development environment (IDE), allowing it to understand existing code context and accurately predict the next lines of code [2]
- This approach balances macro-level instructions with micro-level control, enabling developers to delegate tasks to AI while maintaining oversight [2]
- Cursor has rapidly grown into an industry leader with over 1 million daily active users and annual revenue exceeding $1 billion [2]

Group 3: Company Growth and Valuation
- Founded by Truell and three MIT alumni, Cursor reached a post-funding valuation of $29.3 billion after raising $2.3 billion in a round completed in 2025 [2]
- The company has expanded to 300 employees and received investment from the OpenAI startup fund in 2023 [2]
Large Models in 2025: 6 Key Insights
36Kr · 2025-12-23 11:39
Core Insights
- The report "2025 LLM Year in Review" by Andrej Karpathy highlights a significant paradigm shift in large language models (LLMs), from mere "probabilistic imitation" to "logical reasoning" [1][2]
- The driving force behind this transition is the maturity of Reinforcement Learning with Verifiable Rewards (RLVR), which encourages models to generate reasoning traces similar to human thought processes [1][2]
- Karpathy emphasizes that the potential of this new computational paradigm has yet to be fully explored, with current utilization estimated at less than 10% [2][15]

Technological Developments
- In 2025, RLVR emerged as the core new phase in the training stack for production-grade LLMs, allowing models to autonomously develop reasoning strategies through training in verifiable environments [4][5]
- The year saw a significant extension in model training cycles, although overall parameter scale remained largely unchanged [5]
- The introduction of the o1 model at the end of 2024 and the o3 model in early 2025 marked a qualitative leap in LLM capabilities [5]

Nature of Intelligence
- Karpathy argues that LLMs should be viewed as "summoned ghosts" rather than "evolving animals," indicating a fundamental difference in their intelligence structure compared to biological entities [2][6]
- LLM performance exhibits a "zigzag" characteristic, excelling in advanced areas while struggling with basic common knowledge [2][8]

New Applications and Trends
- The rise of "Vibe Coding" and the practical trend of localized intelligent agents indicate a shift toward more user-centric AI applications [2][9]
- The emergence of tools like Cursor highlights a new application layer for LLMs, focusing on context engineering and optimizing model interactions for specific verticals [9]

User Interaction and Development
- The introduction of Claude Code (CC) showcases the capabilities of LLM agents, emphasizing local deployment for enhanced user interaction and access to private data [10][11]
- The concept of vibe coding allows users to create powerful programs using natural language, democratizing programming skills [12][13]

Future Outlook
- The report suggests the industry is on the brink of a transition from simulating human intelligence to achieving pure machine intelligence, with future competition focusing on efficient AI reasoning [2][15]
- The potential for innovation in the LLM space remains vast, with many ideas yet to be explored [15]
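The RLVR mechanism summarized above reduces to a simple loop: the model proposes a solution, an automated checker scores it, and the pass/fail outcome becomes the training reward. Below is a minimal, illustrative sketch of such a verifiable-reward function for code generation, where the "verifier" simply runs the candidate against bundled assertions; all names and the setup are assumptions for illustration, not details from Karpathy's report.

```python
import subprocess
import sys
import tempfile

def verifiable_reward(candidate_code: str, test_code: str) -> float:
    """Score a model-generated solution with an automatic check.

    Returns 1.0 if the bundled assertions pass, 0.0 otherwise --
    the binary, machine-checkable signal that RLVR-style training
    optimizes instead of imitating human-written text.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n" + test_code + "\n")
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, timeout=10)
    return 1.0 if result.returncode == 0 else 0.0

# A correct and an incorrect "model completion" for the same task:
good = "def add(a, b):\n    return a + b"
bad = "def add(a, b):\n    return a - b"
tests = "assert add(2, 3) == 5"

print(verifiable_reward(good, tests))  # 1.0
print(verifiable_reward(bad, tests))   # 0.0
```

The key property is that the reward requires no human labeling: any domain with an automatic checker (unit tests, math answer matching, compilers) can supply training signal at scale.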
Large Models in 2025: 6 Key Insights
Tencent Research Institute · 2025-12-23 08:33
Core Insights
- The article discusses a significant paradigm shift in large language models (LLMs) in 2025, moving from "probabilistic imitation" to "logical reasoning," driven by the maturity of Reinforcement Learning with Verifiable Rewards (RLVR) [2][3]
- The author emphasizes that less than 10% of LLMs' potential has been explored, indicating vast future development opportunities [3][25]

Group 1: Technological Advancements
- In 2025, RLVR emerged as the core new phase in training LLMs, allowing models to autonomously generate reasoning traces by training in environments with verifiable rewards [7][8]
- The increase in model capabilities in 2025 was primarily due to exploring and releasing the "stock potential" of RLVR, rather than significant changes in model parameter sizes [8][9]
- The introduction of the o1 model at the end of 2024 and the o3 model in early 2025 marked a qualitative leap in LLM capabilities [9]

Group 2: Nature of Intelligence
- The author argues that LLMs should be viewed as "summoned ghosts" rather than "evolving animals," highlighting a fundamental difference in their intelligence compared to biological entities [10][11]
- LLM performance exhibits a "sawtooth" characteristic, excelling in advanced fields while struggling with basic common knowledge [12][13]

Group 3: New Applications and Interfaces
- The emergence of Cursor represents a new application layer for LLMs, focusing on context engineering and optimizing prompt design for specific verticals [15]
- The introduction of Claude Code (CC) demonstrated the core capabilities of LLM agents, operating locally on user devices and accessing private data [17][18]
- The concept of vibe coding allows users to create powerful programs using natural language, democratizing programming skills [20][21]

Group 4: Future Directions
- The article suggests that the future of LLMs will involve a shift toward visual and interactive interfaces, moving beyond text-based interactions [24]
- The potential for innovation in the LLM space remains vast, with many ideas yet to be explored, indicating continuous evolution in the industry [25]
Large Models in 2025: 6 Key Insights from OpenAI Co-Founder and AI Guru "AK"
36Kr · 2025-12-22 04:22
Core Insights
- The report by Andrej Karpathy highlights a significant paradigm shift in large language models (LLMs) in 2025, from "probabilistic imitation" to "logical reasoning," driven by the maturation of Reinforcement Learning with Verifiable Rewards (RLVR) [1][2]
- The industry is at a critical juncture, transitioning from "simulating human intelligence" to "pure machine intelligence," with a focus on making AI think efficiently rather than just competing on computational power [2][4]

Group 1: Technological Advancements
- RLVR has emerged as the core new phase in LLM training, allowing models to autonomously generate reasoning traces by training in environments with verifiable rewards [4][5]
- The year 2025 saw a significant extension in LLM training cycles, with the ability to optimize for longer reasoning traces and increased "thinking time," leading to qualitative leaps in model capabilities [5][6]

Group 2: Nature of Intelligence
- Karpathy argues that LLMs should be viewed as "summoned ghosts" rather than "evolving animals," indicating a fundamental difference between AI intelligence and biological intelligence [6][7]
- LLM performance exhibits a "zigzag" characteristic, excelling in specialized areas while struggling with basic common knowledge, reflecting their unique intelligence structure [8]

Group 3: New Applications and Interfaces
- The emergence of applications like Cursor signifies a new layer in LLM usage, focusing on context engineering and optimizing the orchestration of multiple LLM calls for specific vertical domains [9][10]
- The introduction of Claude Code (CC) demonstrates the potential of LLM agents to operate locally on user devices, accessing private data and providing a new paradigm of AI interaction [10][11]

Group 4: Programming and Development
- The concept of "vibe coding" has gained traction, allowing individuals to create powerful programs using natural language, thus democratizing programming skills beyond trained professionals [11][12]
- The shift toward vibe coding is expected to transform the software development ecosystem, making coding more accessible and flexible for everyday users [12][13]

Group 5: Future Prospects
- Despite the rapid advancements, the industry has tapped into less than 10% of LLMs' potential, indicating vast opportunities for future exploration and innovation [14][15]
- The report emphasizes the need for foundational work to continue alongside the rapid development of LLM technologies, suggesting a sustained period of transformation ahead [14][15]
Karpathy's 2025 Year in Review: o3 Was the Real Inflection Point, and Cursor Proved the Application Layer Is Thicker Than We Thought
Founder Park · 2025-12-20 08:59
Core Insights
- The article emphasizes that 2025 was an exciting year for large language models (LLMs), highlighting their potential and the field's ongoing evolution [2][3]
- It suggests that the industry has realized less than 10% of LLMs' potential, indicating vast opportunities for exploration and innovation [4][5]

Paradigm Shifts
- The introduction of Reinforcement Learning with Verifiable Rewards (RLVR) is identified as a significant shift in LLM training, becoming a primary component of the training stack in 2025 [12]
- RLVR allows LLMs to train in environments where answers can be automatically verified, leading to improved problem-solving capabilities [14][16]
- The article notes that the performance improvements of 2025 stemmed primarily from the adoption of RLVR rather than from an increase in model parameters [17]

New Applications
- Cursor is highlighted as a new application-layer product demonstrating the potential for LLMs to be tailored for specific verticals, sparking discussion about the future of LLM applications [28][30]
- Claude Code is presented as a groundbreaking product that showcases LLM capabilities in a local environment, emphasizing the shift from cloud-based to local AI applications [34][36]
- Vibe Coding is introduced as a transformative concept that democratizes programming, allowing anyone to create software using natural language [38][40]

Future Models
- The Gemini Nano Banana model is described as one of the most significant models of 2025, hinting at the future of LLMs and their integration with graphical user interfaces [42][46]
- The article suggests that LLMs should communicate in preferred formats such as images and visualizations, rather than just text, to enhance user interaction [44]
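The same verifiability that drives RLVR training can also be exploited at inference time: sample several candidate answers and keep one that passes the automatic check (often called best-of-n sampling). The sketch below is illustrative only; the random "sampler" is a toy stand-in for an actual LLM call, and none of the names come from the article.

```python
import random
from typing import Callable, Optional

def best_of_n(sample: Callable[[], str],
              verify: Callable[[str], bool],
              n: int = 8) -> Optional[str]:
    """Draw up to n candidates; return the first one that verifies, else None."""
    for _ in range(n):
        candidate = sample()
        if verify(candidate):
            return candidate
    return None

# Toy stand-in for a model: it "guesses" an answer near the truth, and a
# verifier checks it exactly -- e.g. the question was "7 + 5 = ?".
random.seed(0)
sampler = lambda: str(random.randint(10, 14))
verifier = lambda answer: answer == "12"

print(best_of_n(sampler, verifier, n=50))
```

The verifier here plays the same role as the reward function in training: a cheap, automatic check that turns many unreliable samples into one trustworthy answer.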
Karpathy's 2025 Large-Model Recap Takes Silicon Valley by Storm
QbitAI · 2025-12-20 04:20
Core Insights
- The article discusses emerging AI trends for 2025, highlighting the transformative impact of large models and the view that only 10% of their potential has been realized so far [6][7]

Group 1: Key Predictions and Trends
- The introduction of RLVR (Reinforcement Learning with Verifiable Rewards) marks a new phase in training large models, allowing them to develop reasoning strategies autonomously [8][10]
- The performance of large models is expected to exhibit a "zigzag" characteristic, with rapid bursts of capability as RLVR is adopted [18]
- Cursor represents a new application layer for large models, suggesting a shift toward more integrated and user-friendly AI applications [23][24]

Group 2: Innovations in AI Applications
- Claude Code is identified as a significant example of a large-model agent, capable of running locally on personal computers and utilizing user-specific data [26][32]
- Vibe Coding is anticipated to democratize programming, enabling non-professionals to create software through natural language [34][37]
- Nano Banana is highlighted as a groundbreaking model that integrates text generation, image generation, and world knowledge, setting a new standard for AI user interfaces and experience [40][43]
Cursor's Valuation Soars to $29.3 Billion, with Each of Its Four Founders Worth Over $1.3 Billion
36Kr · 2025-11-14 09:41
Group 1: Company Overview
- Anysphere, the developer of the AI code editor Cursor, announced a $2.3 billion funding round, raising its valuation to $29.3 billion [1]
- Cursor's core product is an AI-driven code editor that supports models from various companies, enabling automatic code writing, file editing, and error fixing [2]
- The company has achieved over $100 million in annualized revenue and serves around 50,000 engineering teams, including major clients such as NVIDIA, Adobe, and Uber [3]

Group 2: Funding and Investment
- The recent round was led by existing investor Accel and new investor Coatue, with participation from Google and NVIDIA [1]
- The funding will be used for technology development and to enhance the in-house model Composer, which aims to reduce reliance on third-party models [3]

Group 3: Strategic Acquisitions
- Cursor announced the acquisition of Growth by Design Talent (GBD), a talent-strategy firm, to strengthen its organizational capabilities [8]
- GBD has a history of building world-class teams for tech companies and will support Cursor through its rapid growth phase [9]

Group 4: Founders and Wealth Growth
- Cursor's four co-founders, all MIT graduates under 30, became billionaires following the company's valuation increase [12]
- Each founder holds approximately 4.5% of the company, putting their net worth above $1.3 billion at the latest valuation [12]