Prompt Engineering

While everyone is busy learning prompt engineering, the smart ones are quietly mastering this skill instead
36Kr · 2025-08-02 00:32
Shenyiju is 36Kr's in-house translation team, covering technology, business, the workplace, and everyday life, with a focus on new technologies, new ideas, and new trends from abroad.

Editor's note: While the masses are absorbed in prompt engineering, top talent is staking out the "AI translation layer": the two-way communication skill of turning machine insights into human decisions is becoming a core competency ten times scarcer than technical operation. This article is a translation.

Every professional I know is learning prompt engineering. They take courses on ChatGPT, study machine-learning fundamentals, and work hard to become "the person who gets AI." The mainstream view is that coping with the coming workplace transformation requires AI technical skills, and LinkedIn is flooded with posts about prompt optimization and mastering AI tools.

But they are missing a key point. While everyone else races to become a better AI operator, a small group of professionals is quietly positioning itself in more valuable territory. Rather than learning to compete with AI, they are learning to make AI serve humans. In a world overflowing with AI insights that no one knows how to act on, these people are becoming irreplaceable.

Their core skill? Communication. Not communication in general, but the specialized ability to turn AI output into something humans can understand and act on.

The crux of the problem: AI creates more confusion than clarity. AI has advanced faster than most people imagined. Today's systems can analyze massive datasets in seconds, generate detailed reports, and offer complex recommendations. ...
Karpathy coined "hallucination" a decade ago? How many concepts has AI's master of naming popularized?
机器之心· 2025-07-28 10:45
Core Viewpoint
- The article discusses the influential contributions of Andrej Karpathy in the AI field, particularly his role in coining significant terms and concepts that have shaped the industry, such as "hallucinations," "Software 2.0," "Software 3.0," "vibe coding," and "bacterial coding" [1][6][9].

Group 1: Naming and Concepts
- Karpathy coined the term "hallucinations" to describe a limitation of neural networks, which generate meaningless content when faced with unfamiliar concepts [1][3].
- He is recognized as a master of naming in the AI community, having introduced terms like "Software 2.0" and "Software 3.0," which have gained traction over the years [6][9].
- The act of naming is emphasized as a foundational behavior in knowledge creation, serving as a stable target for global scientific focus [7].

Group 2: Software Evolution
- "Software 1.0" refers to traditional programming, where explicit instructions are written in languages like Python and C++ [12][14].
- "Software 2.0" represents a shift to neural networks, where developers train models on datasets instead of writing explicit rules [15].
- "Software 3.0" lets users generate code through plain English prompts, making programming accessible to non-developers [16][17].

Group 3: Innovative Programming Approaches
- "Vibe coding" encourages developers to immerse themselves in the development flow, relying on LLMs to generate code from verbal requests [22][24].
- "Bacterial coding" promotes writing modular, self-contained code that can be easily shared and reused, inspired by the adaptability of bacterial genomes [30][35].
- Karpathy suggests balancing the flexibility of bacterial coding with the structured approach of eukaryotic coding to support complex system development [38].

Group 4: Context Engineering
- Context engineering has gained attention as a more comprehensive approach than prompt engineering, focusing on providing structured context for AI applications [43][44].
- The article highlights a shift toward optimizing documentation for AI readability, indicating a trend in which 99.9% of content may be processed by AI in the future [45].
A comprehensive guide to context engineering, distilled from 1,400 research papers | Jinqiu Select
锦秋集· 2025-07-21 14:03
Core Insights
- The article discusses the emerging field of Context Engineering, emphasizing the need for a systematic theoretical framework to complement the practical experiences shared by Manus' team [1][2].
- A comprehensive survey titled "A Survey of Context Engineering for Large Language Models" has been published, analyzing over 1,400 research papers to establish a complete technical system for Context Engineering [1][2].

Context Engineering Components
- Context Engineering is built on three interrelated components: Information Retrieval and Generation, Information Processing, and Information Management, forming a complete framework for optimizing context in large models [2].
- The first component, Context Retrieval and Generation, focuses on engineering methods for effectively acquiring and constructing context for models, including practices like Prompt Engineering, external knowledge retrieval, and dynamic context assembly [2].

Prompting Techniques
- Prompting is the starting point for model interaction; effective prompts can unlock deeper capabilities of the model [3].
- Zero-shot prompting provides direct instructions that rely on pre-trained knowledge, while few-shot prompting offers a handful of examples to guide the model toward the task requirements [4].

Advanced Reasoning Frameworks
- Complex tasks call for structured thinking: Chain-of-Thought (CoT) prompting has models reason step by step, significantly improving accuracy on complex tasks [5].
- Tree-of-Thoughts (ToT) and Graph-of-Thoughts (GoT) further enhance reasoning by allowing exploration of multiple paths and dependencies, improving success rates on tasks that require extensive exploration [5].

Self-Refinement Mechanisms
- Self-Refinement allows models to iteratively improve their outputs through self-feedback, without requiring additional supervised training data [8][9].
- Techniques like N-CRITICS and Agent-R enable models to evaluate and correct their reasoning paths in real time, enhancing output quality [10][11].

External Knowledge Retrieval
- External knowledge retrieval, particularly through Retrieval-Augmented Generation (RAG), addresses the static nature of model knowledge by integrating dynamic information from external databases [12][13].
- Advanced RAG architectures introduce adaptive retrieval mechanisms and hierarchical processing strategies to enhance retrieval efficiency [14][15].

Context Processing Challenges
- Processing long contexts is computationally expensive due to the quadratic complexity of Transformer self-attention [28].
- Innovations like State Space Models and Linear Attention aim to reduce computational complexity, allowing models to handle longer sequences more efficiently [29][30].

Context Management Strategies
- Effective context management is crucial for organizing, storing, and utilizing information, addressing issues like context overflow and collapse [46][47].
- Memory architectures inspired by operating systems and cognitive models are being developed to enhance the memory capabilities of language models [48][50].

Tool-Integrated Reasoning
- Tool-Integrated Reasoning transforms language models from passive text generators into active agents capable of interacting with the external world through function calling and integrated reasoning frameworks [91][92].
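The zero-shot, few-shot, and Chain-of-Thought techniques surveyed above differ mainly in how the prompt string is assembled. A minimal sketch of that assembly (the arithmetic examples, the `build_prompt` helper, and the step-by-step trigger phrase are illustrative, not APIs from the survey):

```python
# Sketch: assembling zero-shot, few-shot, and chain-of-thought prompts.
# The task and worked examples below are made up for illustration.

FEW_SHOT_EXAMPLES = [
    ("Q: 12 + 7?", "A: 19"),
    ("Q: 30 - 4?", "A: 26"),
]

def build_prompt(question: str, style: str = "zero-shot") -> str:
    """Return a prompt string for the given prompting technique."""
    parts = []
    if style == "few-shot":
        # Show the model a few worked examples before the real question.
        for q, a in FEW_SHOT_EXAMPLES:
            parts.append(f"{q}\n{a}")
    instruction = question
    if style == "cot":
        # Chain-of-Thought: nudge the model to reason step by step.
        instruction += "\nLet's think step by step."
    parts.append(instruction)
    return "\n\n".join(parts)

print(build_prompt("Q: 15 + 8?", style="cot"))
```

The same question can then be sent through each style and the answers compared, which is how the survey's accuracy claims for CoT are typically evaluated.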
Jensen Huang: I use AI every day, and prompt engineering can raise your cognitive level
量子位· 2025-07-16 04:21
Shiling, reporting from Aofeisi | QbitAI (WeChat official account: QbitAI)

"I use AI every day, and I consider prompt engineering an advanced cognitive skill."

The speaker is Jensen Huang, whose net worth has just surpassed Warren Buffett's. He also said that fears of AI wiping out jobs are exaggerated, though that does not mean the way we work won't change dramatically. He is one hundred percent certain that everyone's job will change.

The remarks come from Huang's latest interview with CNN, in which he also spoke about the importance of the Chinese market. Notably, in an interview with CCTV, Huang announced the latest developments:

1. The H20 has been approved for sale in the Chinese market: "very, very good news";
2. A new graphics card, the RTX Pro, will be released: this card is very important, designed for computer graphics, digital twins, and AI.

Reshaping work through large-scale task reduction

Huang believes AI will reshape nearly every job, not through mass unemployment but through the large-scale reduction and restructuring of tasks. Some jobs will disappear, but many new ones will be created; he hopes the productivity gains AI brings across industries will ultimately advance society as a whole.

"I don't have it think for me; I have it teach me things I don't yet know, or help me solve problems I can't reasonably work through on my own."

He believes that prompting AI effectively is itself a skill, requiring both cognitive effort and clarity of expression. ...
Context is everything! The industry debate: should prompt engineering be renamed?
歸藏的AI工具箱· 2025-06-26 11:40
Core Viewpoint
- The article discusses the emerging concept of "context engineering" in AI, suggesting it is a more accurate term than "prompt engineering" for describing the skills needed to use large language models (LLMs) effectively [1][2].

Group 1: Importance of Context Engineering
- Context engineering is essential for optimizing the performance of AI agents, as insufficient context can lead to inconsistent actions among sub-agents and hinder accurate instruction-following [4][5].
- LLM performance can decline if the context is too long or contains irrelevant information, which also increases cost and latency [4][5].
- Instruction adherence is crucial for agents; top models show a significant drop in accuracy in multi-turn conversations, highlighting the need to optimize context length and accuracy [4][5].

Group 2: Strategies for Optimizing Context Engineering
- Context engineering encompasses three common strategies: compression, persistence, and isolation [5][6].
- Compression aims to retain only the most valuable tokens in each interaction, with methods like context summarization being critical [6][7].
- Persistence involves building systems for storing, saving, and retrieving context over time, considering storage formats, saving strategies, and retrieval processes [9][10].
- Isolation focuses on managing context across different agents or environments, using structured runtime state to control what the LLM sees in each interaction [16][18].

Group 3: Practical Experiences and Recommendations
- The article emphasizes building robust context management systems for AI agents that balance performance, cost, and accuracy [24].
- Memory systems should stay simple and track specific agent preferences over time, while parallelizable tasks are good candidates for multi-agent architectures [26].
- A token tracking mechanism is highlighted as foundational for any context engineering work [23].
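The compression and token-tracking ideas above can be sketched in a few lines. This is a minimal illustration, not the article's implementation: `estimate_tokens` is a crude stand-in for a real tokenizer, and `summarize` stands in for an LLM summarization call.

```python
# Sketch of the "compression" strategy: keep recent turns verbatim and
# fold older turns into a summary once a token budget is exceeded.
# estimate_tokens and summarize are hypothetical stand-ins.

def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return max(1, len(text) // 4)

def summarize(turns: list[str]) -> str:
    # Stand-in for an LLM summarization call over the evicted turns.
    return "Summary of earlier turns: " + "; ".join(t[:20] for t in turns)

def compress_context(turns: list[str], budget: int) -> list[str]:
    """Evict the oldest turns into a summary until the context fits the budget."""
    turns = list(turns)  # never mutate the caller's history
    evicted = []
    while sum(estimate_tokens(t) for t in turns) > budget and len(turns) > 1:
        evicted.append(turns.pop(0))
    if evicted:
        return [summarize(evicted)] + turns
    return turns
```

Persistence and isolation would layer on top of this: the evicted turns could be written to a store for later retrieval, and separate agents would each run `compress_context` over their own isolated histories.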
Briefing | The next billion-dollar AI ideas are hidden in system prompts; Superblocks closes $23M Series A extension
Z Potentials· 2025-06-08 03:04
Image source: Superblocks

Brad Menezes, CEO of enterprise low-code development platform Superblocks, believes the next batch of billion-dollar startup ideas is hiding in plain sight: in the system prompts used by today's AI unicorns.

Superblocks announced last week a $23 million Series A extension, bringing its total Series A funding to $60 million; the company's vibe-coding tool targets non-developers in enterprises.

System prompts are the long prompts, often exceeding 5,000 to 6,000 words, that AI startups use to instruct foundation models from companies like OpenAI or Anthropic on how to power their application-level AI products. In Menezes's view, these prompts amount to a "master class" in prompt engineering.

"Every company's system prompt for the same foundation model is completely different," he said. "They are trying to make the model behave exactly as a specific domain and a specific task require."

System prompts are not entirely secret. Customers can ask many AI tools to share their prompts, though the prompts are not always publicly available. So, as part of the launch of Clark, its new enterprise coding AI assistant, Superblocks offered to share 19 system prompt files ...
A 50,000-line Vibe Coding retrospective: best practices, key techniques, and the Bitter Lesson
海外独角兽· 2025-06-05 11:00
Core Viewpoint
- The article discusses the transformative potential of AI coding agents, highlighting their ability to generate code and automate programming tasks, enabling even those without extensive coding experience to become proficient developers [3][6].

Group 1: My Vibe Coding Journey
- Vibe Coding refers to the practice of using coding agents to generate nearly 100% of the code, with tools like Cursor, Cline, and GitHub Copilot being popular choices [7].
- The author completed approximately 50,000 lines of code over three months, successfully developing three different products and demonstrating the effectiveness of AI in coding [8][9].
- The experience revealed that a lack of prior knowledge of a programming language can be an advantage when relying on AI, as it forces full dependence on the coding agent [8].

Group 2: Key Technologies of Coding Agents
- Key coding agents include Cursor, Cline, GitHub Copilot, and Windsurf, with a strong recommendation to use agent mode for optimal performance [13][14].
- The effectiveness of coding agents rests on three critical components: a powerful AI model, sufficient context, and an efficient toolchain [15][18].
- Providing clear and comprehensive context to the AI is essential for successful task execution [11][12].

Group 3: Comparison of Coding Agents
- Cursor is highlighted as the current leader in the coding-agent space, particularly with the Claude 3.7 Max model, capable of generating 100% of the code for large projects [44].
- Cline is noted for its open-source nature and superior support for the Model Context Protocol (MCP), but it lacks semantic search, which limits its effectiveness on large codebases [45].
- GitHub Copilot lags behind in context management and MCP support, but has the potential to catch up given Microsoft's strong development capabilities [46].

Group 4: The Bitter Lesson in Agent Development
- The article references "The Bitter Lesson," which suggests that embedding too much human experience into AI systems can limit their potential, advocating designs that let AI capabilities dominate [47][48].
- The author's experience indicates that reducing human input in favor of AI-driven processes can significantly enhance product quality, achieving a test coverage rate of over 99% [48].
"AI-generated code is legacy code from the moment it's born!"
AI科技大本营· 2025-05-12 10:25
[Editor's note] As generative AI weaves into the software development workflow, more and more AI-generated code is landing in real projects. But have you considered that this code may be regarded as "legacy code" from the moment it is written? Drawing on engineering experience and the way AI generates code, the author advances a thought-provoking view: AI-generated code lacks contextual memory and maintenance continuity, so from birth it exists in the state of "someone else's old work." This is both a sober observation of current AI coding capabilities and a new lens for understanding the future shape of software development.

Original link: https://text-incubation.com/AI+code+is+legacy+code+from+day+one
Translated by Zheng Liyuan | Produced by CSDN (ID: CSDNnews)

In software development, how "improvable" code is usually depends on where it sits in its lifecycle, which can typically be grouped into the following categories:

Overall, the pace at which code evolves usually depends on how recently it was written and whether the maintainer is the original author. This state of affairs is reasonable: for a stable, battle-tested software system, rash "improvements" often introduce extra risk, especially when you don't fully grasp the system's overall structure; the original author usually understands its underlying logic and development background best.

AI-generated code ...
The ultimate guide to AI prompts: master these techniques and double your output quality
36Kr · 2025-05-11 02:04
Group 1
- The article emphasizes the importance of asking precise questions to unlock AI's potential, suggesting that the quality of prompts directly influences the quality of AI outputs [1][4][30].
- It introduces a set of principles for constructing better AI prompts, highlighting that anyone can improve their interactions with AI by adjusting their input methods [4][29].
- The article categorizes prompts into two main types: directive prompts for clear tasks and conversational prompts for brainstorming or creative exploration [5][7].

Group 2
- Key characteristics of effective prompts include clarity, context, and strong purpose, with specific instructions leading to higher-quality outputs [5][6][31].
- Providing background information and context is crucial for guiding AI responses, as it helps the AI understand the task better [11][31].
- Breaking complex tasks into smaller steps enhances AI performance, as AI works best with clear, step-by-step instructions [22][31].

Group 3
- Iteration is highlighted as a key strategy, encouraging users to refine their prompts based on initial outputs to achieve better results [23][28].
- Role-playing techniques can significantly improve AI responses, as assigning specific roles to the AI leads to more relevant and tailored outputs [24][31].
- The article advocates testing and tracking prompts to identify effective strategies and build a personal library of successful prompts for future use [27][32].
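The role-assignment and step-decomposition tips above amount to a structured prompt template. A minimal sketch (the role, task, and steps shown are illustrative examples, and `role_prompt` is a hypothetical helper, not something from the article):

```python
# Sketch of a directive prompt template combining three of the article's tips:
# assign a role, state the task clearly, and break the work into explicit steps.

def role_prompt(role: str, task: str, steps: list[str]) -> str:
    """Combine a role assignment, a clear task, and numbered steps into one prompt."""
    lines = [f"You are {role}.", f"Task: {task}", "Follow these steps:"]
    lines += [f"{i}. {s}" for i, s in enumerate(steps, 1)]
    return "\n".join(lines)

print(role_prompt(
    "a senior technical editor",
    "tighten the draft below without changing its meaning",
    ["Fix grammar and typos", "Shorten run-on sentences", "Flag unsupported claims"],
))
```

Templates like this also support the article's iteration advice: because the role, task, and steps are separate parameters, each can be refined independently and the variants logged to a personal prompt library.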