Prompt Engineering

OpenAI Launches Study Mode: Are AI Teachers Finally Here?
Hu Xiu· 2025-07-30 01:45
Core Insights
- OpenAI has introduced a significant update to ChatGPT called Study Mode, designed to enhance the learning experience by guiding users through problem-solving rather than just providing answers [1][2].

Group 1: Features of Study Mode
- In Study Mode, ChatGPT acts as a mentor, using Socratic questioning and hints to encourage active learning and deeper understanding [3][4].
- The mode is accessible to free users and has received positive feedback for its interactive prompts and structured responses that reduce cognitive load [4][6].
- The system is tailored to individual users based on their skill levels and previous interactions, providing personalized support [4][12].

Group 2: Educational Approach
- The underlying framework of Study Mode was developed in collaboration with educators and experts, focusing on core behaviors that promote deeper learning, such as encouraging participation and managing cognitive load [12].
- Key instructional strategies include checking for understanding, reinforcing concepts, and using varied pacing to maintain engagement [20][24].
- The mode emphasizes collaborating with users to help them discover answers rather than providing direct solutions, fostering a more interactive learning environment [25][26].
Just In: OpenAI Launches Study Mode, AI Teachers Are Really Here, and the System Prompt Has Leaked
36Kr· 2025-07-30 01:37
Core Insights
- OpenAI has introduced a significant update to ChatGPT called Study Mode, which aims to assist users in problem-solving step by step rather than just providing direct answers [1][2].

Features and Characteristics
- **Interactive Prompts**: Study Mode employs Socratic questioning and hints to encourage active learning, rather than simply delivering answers [2].
- **Scaffolding Responses**: Information is organized into easily digestible sections, highlighting key connections between topics to reduce cognitive load [2].
- **Personalized Support**: The mode tailors courses based on users' skill levels and previous interactions, enhancing the learning experience [2].
- **Knowledge Testing**: It includes quizzes and open-ended questions with personalized feedback to track progress and reinforce knowledge [2].
- **Flexibility**: Users can easily switch to Study Mode during conversations, allowing for adaptable learning objectives [2].

Implementation and Design
- OpenAI collaborated with educators and experts to develop a custom system of instructions that promote deeper learning behaviors, such as encouraging participation and managing cognitive load [10].
- The system prompts are designed to help users discover answers through guidance rather than direct solutions [13][15].

User Experience
- Users can utilize Study Mode for various educational purposes, including homework assistance and exam preparation [4].
- The mode begins by assessing the user's understanding of the topic before providing tailored instructional support [6].
Just In: OpenAI Launches Study Mode, AI Teachers Are Really Here, and the System Prompt Has Leaked
机器之心· 2025-07-30 00:48
Core Viewpoint
- ChatGPT has introduced a new feature called Study Mode, which aims to enhance user learning by guiding users through problem-solving rather than simply providing answers [1][2][4].

Summary by Sections

Features of Study Mode
- Interactive prompts encourage active learning through Socratic questioning and hints, rather than direct answers [5].
- Responses are organized into understandable sections, highlighting key connections between topics to reduce cognitive load [5].
- The mode offers personalized support tailored to the user's skill level and previous interactions [5].
- Knowledge assessments, including quizzes and open-ended questions, are provided to track progress and reinforce learning [5].
- Users can easily switch to Study Mode during conversations, allowing for flexible learning objectives [5].

User Experience
- Initial feedback on Study Mode has been overwhelmingly positive, indicating its effectiveness in enhancing the learning experience [6].
- A practical example demonstrated how ChatGPT assesses the user's understanding before tailoring the teaching approach to their knowledge level [9].

Development Insights
- OpenAI has collaborated with educators and experts to create a system of prompts that support deeper learning behaviors, such as encouraging active participation and providing actionable feedback [13].
- The underlying principles of Study Mode are based on extensive research in learning sciences [13].

Prompt Engineering
- OpenAI has openly shared the key components of the system prompts used in Study Mode, emphasizing the importance of understanding user goals and building on existing knowledge (a minimal illustration follows below) [16][17][18].
- The approach focuses on guiding users through questions and prompts rather than providing direct answers, fostering a collaborative learning environment [19][22].
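The leaked prompt text itself is not reproduced in this digest, but the behaviors described above map naturally onto a system message. Below is a minimal sketch using the OpenAI Python SDK; the tutor instructions and model name are illustrative assumptions, not the actual Study Mode prompt:

```python
# Minimal sketch of a Socratic "study mode" built from a system prompt.
# The instruction text below is illustrative, NOT the leaked Study Mode prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TUTOR_INSTRUCTIONS = """\
You are a patient tutor. Guide the student with questions and hints;
do not give final answers outright. First ask what the student already
knows. Explain in small sections, check understanding after each one,
and close with a short quiz."""

def study_turn(history: list[dict], user_message: str) -> str:
    """Run one turn of the tutoring conversation and return the reply."""
    messages = [{"role": "system", "content": TUTOR_INSTRUCTIONS}]
    messages += history + [{"role": "user", "content": user_message}]
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat-capable model works
        messages=messages,
    )
    return response.choices[0].message.content

print(study_turn([], "Help me see why dividing by zero is undefined."))
```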
Karpathy: I'm Not Trying to Coin a New Term; "Context Engineering" Really Is That Important for Agents
Founder Park· 2025-07-04 13:10
Core Viewpoint
- The concept of "Context Engineering" has gained traction in the AI industry, emphasizing that the effectiveness of AI applications depends more on the quality of the context provided than on the prompts used to query the AI [1][3].

Group 1: Definition and Importance of Context Engineering
- Context Engineering is defined as the discipline of designing and building dynamic systems that provide appropriate information and tools to large language models (LLMs) at the right time and in the right format [19].
- The quality of context provided to an AI agent is crucial to its effectiveness, mattering more than the complexity of the code or framework used [24].
- A well-constructed context can significantly enhance the performance of AI agents, as demonstrated by examples where rich context leads to more relevant and useful responses [25].

Group 2: Components of Context Engineering
- Context Engineering encompasses various elements, including prompt engineering, current state or dialogue history, long-term memory, and retrieval-augmented generation (RAG) [15][11].
- The distinction between prompts, prompt engineering, and context engineering is clarified: prompts are the immediate instructions given to the AI, while context engineering is a broader system that dynamically generates context based on task requirements [15][19].

Group 3: Strategies for Implementing Context Engineering
- Four common strategies for implementing Context Engineering are identified: writing context, selecting context, compressing context, and isolating context (sketched in code below) [26].
- Writing context involves saving information outside the context window to assist the agent in completing tasks, such as maintaining a calendar or email history [28][29].
- Selecting context refers to pulling necessary information into the context window to aid the agent, which can include filtering relevant memories or examples [36][38].
- Compressing context focuses on retaining only the essential tokens needed for task execution, often through summarization techniques [43][44].
- Isolating context involves distributing context across multiple agents or using environments to manage context effectively, enhancing task focus and reducing token consumption [47][50].
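The four strategies lend themselves to a schematic illustration. Below is a minimal, self-contained Python sketch of write / select / compress / isolate; the helper names are invented for illustration, and the compression step is a simple stand-in for real LLM summarization:

```python
# Schematic sketch of the four context-engineering strategies.
# All names are invented for illustration; no specific framework is assumed.

scratchpad: dict[str, str] = {}  # long-term store living OUTSIDE the context window

def write_context(key: str, note: str) -> None:
    """'Write': persist information outside the context window for later use."""
    scratchpad[key] = note

def select_context(query: str) -> list[str]:
    """'Select': pull only the notes relevant to the current task."""
    return [note for key, note in scratchpad.items() if query.lower() in key.lower()]

def compress_context(notes: list[str], budget: int = 200) -> str:
    """'Compress': keep context within a budget. A real system would
    summarize with an LLM; simple truncation stands in here."""
    return " | ".join(notes)[:budget]

def isolate_context(subtask: str, notes: list[str]) -> dict:
    """'Isolate': hand a sub-agent only the context its subtask needs."""
    return {"task": subtask, "context": compress_context(notes)}

write_context("calendar/2025-07-04", "Team sync moved to 15:00")
write_context("email/vendor", "Invoice approval still pending")
print(isolate_context("reschedule the sync", select_context("calendar")))
```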
Trending: Prompts Are No Longer AI's Focal Point; the New Hotspot Is Context Engineering
机器之心· 2025-07-03 08:01
Core Viewpoint
- The article emphasizes "Context Engineering" as a systematic approach to optimizing the input provided to large language models (LLMs) for better output generation [3][11].

Summary by Sections

Introduction to Context Engineering
- The article highlights the recent surge in popularity of "Context Engineering," with notable endorsements from figures like Andrej Karpathy and trending status on platforms like Hacker News and Zhihu [1][2].

Understanding LLMs
- LLMs should not be anthropomorphized; they are powerful text generators without beliefs or intentions [4].
- LLMs function as general-purpose, non-deterministic functions that generate new text from the context they are given [5][6][7].
- They are stateless, so all relevant background information must be supplied with each input to maintain context (see the sketch below) [8].

Focus of Context Engineering
- The focus is on optimizing the input rather than altering the model itself, aiming to construct the most effective input text to guide the model's output [9].

Context Engineering vs. Prompt Engineering
- Context Engineering is a more systematic approach than the previously popular "Prompt Engineering," which relied on finding a single perfect command [10][11].
- The goal is an automated system that prepares comprehensive input for the model, rather than issuing isolated commands [13][17].

Core Elements of Context Engineering
- Context Engineering involves building a "super input" toolbox, drawing on techniques like retrieval-augmented generation (RAG) and intelligent agents [15][19].
- The primary objective is to deliver the most effective information, in the appropriate format, at the right time [16].

Practical Methodology
- Using LLMs is likened to scientific experimentation, requiring systematic testing rather than guesswork [23].
- The methodology consists of two main steps: planning backward from the end goal, then constructing forward from the beginning [24][25].
- The final output should be clearly defined, and the necessary input information identified, to create a "raw material package" for the system [26].

Implementation Steps
- The article outlines a rigorous process for building and testing the system, ensuring each component functions correctly before final assembly [30].
- Specific testing phases include verifying data interfaces, search functionality, and the assembly of final inputs [30].

Additional Resources
- For more detailed practices, the article references LangChain's latest blog post and video, which cover the mainstream methods of Context Engineering [29].
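The statelessness point is worth making concrete: because a model call retains nothing between requests, the caller owns the conversation state and must resend it every turn. A minimal sketch, assuming the standard OpenAI Python SDK and an illustrative model name:

```python
# The model is stateless: the caller owns conversation state and must
# resend the full history with every request.
from openai import OpenAI

client = OpenAI()
history: list[dict] = []  # the ONLY memory this "conversation" has

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",    # illustrative model choice
        messages=history,  # complete history travels with every call
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

ask("My project is code-named Atlas.")
print(ask("What is my project called?"))  # works only because history was resent
```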
Forum | 未可知 x 容诚: Innovative Applications of AI in the Fund Industry and the Path to Efficiency Gains
未可知人工智能研究院· 2025-07-02 12:01
Core Viewpoint
- The application of AI technology in the fund industry is transforming operations and enhancing efficiency, with a focus on generative AI and its capabilities in content production and task completion [1][4].

Group 1: AI Technology Development
- The evolution of AI technology is systematically reviewed, highlighting the essential differences between generative AI and traditional decision-making AI [4].
- Generative AI, represented by tools like DeepSeek and Sora, is reshaping content production methods and enabling a leap from "answering questions" to "completing tasks" [4].

Group 2: Specific Applications in the Fund Industry
- Three main directions for efficiency improvement in the fund industry are identified:
  1. Highly efficient information processing, with tools like Secret Tower AI reducing information collection time by 80% [6].
  2. Automated content production, using prompt engineering to quickly generate marketing copy and presentations [6].
  3. Intelligent business processes, where digital employees accurately perform repetitive tasks such as net asset value verification [6].
- A case study from a large fund company demonstrated that deploying RPA digital employees automated most operational processes, saving over 4,000 work hours annually [6].

Group 3: Current State of AI Development in China
- The challenges of computational-power bottlenecks in China's AI development were acknowledged, alongside the unique advantages of domestic models [8].
- DeepSeek's open-source strategy and low-cost training characteristics provide a cost-effective AI transformation path for financial institutions [8].
- Emphasis was placed on the importance of data security, with recommendations for localized deployment to address privacy concerns [8].

Group 4: Future Trends and Initiatives
- A series of AI training courses will be launched to help financial institutions cultivate AI talent, with the next decade framed as a golden period for human-machine collaboration [13].
- Institutions that can build "AI employee" teams early will gain a competitive edge in the industry [13].
- The presentation provided a clear roadmap for the digital transformation of the fund industry, combining theoretical insights with practical value [13].
After Prompt Engineering and RAG, LangChain: Context Engineering Is Taking Off!
机器之心· 2025-06-25 04:06
Core Viewpoint
- Context engineering is emerging as a crucial skill for AI engineers, shifting the focus from traditional prompt engineering to providing structured, dynamic context that lets large language models (LLMs) perform tasks effectively [3][7][15].

Group 1: Definition and Importance of Context Engineering
- Context engineering involves constructing dynamic systems that provide accurate information and tools in the right format, enabling LLMs to complete tasks effectively [9][10].
- Its significance lies in addressing common failures in AI systems, which often stem from inadequate context or incorrect information being provided to the model [12][15].
- Unlike prompt engineering, which focuses on crafting clever prompts, context engineering emphasizes delivering complete, structured context to enhance model performance [17][19].

Group 2: Components of Effective Context Engineering
- Effective context engineering requires accurate information, as models cannot infer context that is not explicitly provided [12][19].
- The format of the context is critical; how information is communicated to the LLM can significantly affect its responses (see the sketch below) [13][19].
- Tools must be appropriately utilized to access external information, and the returned data should be formatted in a way the LLM can easily understand [20].

Group 3: Transition from Prompt Engineering to Context Engineering
- The transition from prompt engineering to context engineering is driven by the increasing complexity of applications, highlighting the need for a more comprehensive approach to providing context [16][17].
- Prompt engineering can be viewed as a subset of context engineering, where the focus shifts from a single input prompt to managing and formatting dynamic data sets [17][18].
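As a concrete reading of "accurate information, in the right format," the sketch below shows a context-assembly step in plain Python; retrieve() is a stub standing in for RAG or tool calls, and the layout is one reasonable choice rather than LangChain's API:

```python
# Sketch: context engineering as dynamic assembly rather than a static prompt.
# retrieve() is a stub standing in for RAG, tool calls, or memory lookups.

def retrieve(query: str) -> list[dict]:
    """Stand-in retriever returning structured records."""
    return [
        {"source": "orders.db", "fact": "Order 1042 shipped on 2025-06-20"},
        {"source": "faq.md", "fact": "Refunds take 5-7 business days"},
    ]

def build_context(task: str) -> str:
    """Assemble complete, clearly formatted input for the model."""
    records = retrieve(task)
    # Format matters: labeled, delimited facts communicate better than a raw blob.
    facts = "\n".join(f"- [{r['source']}] {r['fact']}" for r in records)
    return (
        f"Task: {task}\n"
        f"Relevant facts:\n{facts}\n"
        "Answer using only the facts above; say so if they are insufficient."
    )

print(build_context("Where is order 1042, and when will the refund arrive?"))
```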
PromptPilot Launches: An AI "Mouthpiece" to Optimize Your Every Instruction
Cai Fu Zai Xian· 2025-06-16 10:42
Core Insights
- The article discusses the launch of PromptPilot, an intelligent solution platform designed for large models, which aims to transform vague user ideas into precise AI instructions, ensuring high-quality output from models [1][2].

Group 1: Product Features
- PromptPilot automates the entire lifecycle of prompt generation, debugging, optimization, and iteration, freeing users from tedious tasks [3].
- The platform acts as a "demand translator," helping users clarify their needs through interactive guidance [3].
- It simplifies the process of defining ideal answers by allowing users to select from diverse generated responses, facilitating quick understanding of user intent [3][4].
- PromptPilot incorporates a closed-loop optimization system that turns "bad cases" into data assets for continuous improvement (a generic sketch of this loop follows below) [3][4].

Group 2: Advanced Capabilities
- The platform simulates human-like reflection and error summarization, enabling automatic iterative optimization to find the "golden question" that yields stable results [4].
- It supports multi-turn dialogue optimization, allowing for real-time feedback and enhancement in complex conversational scenarios [5].
- PromptPilot can optimize prompts for multi-modal scenarios, breaking tasks down into multiple steps and searching for optimal solutions [5].
- It enhances function-call scenarios by optimizing both the triggering instructions and the descriptions of tools needed during task execution [5].

Group 3: User Accessibility
- Users can easily integrate PromptPilot through an SDK, enabling automatic monitoring of "bad cases" and initiating a new round of prompt optimization [6].
- The platform standardizes the prompt-engineering process, making it accessible for businesses and developers to focus on innovation in AI applications [6][7].
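PromptPilot's SDK surface is not documented in the article, so the sketch below shows only the generic closed-loop pattern it describes: evaluate a prompt over labeled cases, collect the bad cases, fold the feedback into a revision, and repeat. Every helper here is a hypothetical stand-in, not PromptPilot's actual API:

```python
# Generic closed-loop prompt optimization in the spirit described above.
# Every helper is a hypothetical stand-in, NOT PromptPilot's actual SDK.

def run_model(prompt: str, user_input: str) -> str:
    """Stand-in for a real LLM call; replace with your provider's client."""
    return user_input.strip().upper()  # placeholder behavior for the sketch

def find_bad_cases(prompt: str, cases: list[dict]) -> list[dict]:
    """Run the prompt over labeled cases and collect the failures."""
    bad = []
    for case in cases:
        got = run_model(prompt, case["input"])
        if got != case["expected"]:
            bad.append({**case, "got": got})
    return bad

def revise_prompt(prompt: str, bad_cases: list[dict]) -> str:
    """Fold failure feedback into the next iteration. A real system would
    ask an LLM to reflect on the failures and rewrite the prompt."""
    feedback = "; ".join(f"got {b['got']!r}, wanted {b['expected']!r}"
                         for b in bad_cases)
    return f"{prompt}\nAvoid these mistakes: {feedback}"

def optimize(prompt: str, cases: list[dict], max_rounds: int = 5) -> str:
    """Iterate until no bad cases remain or the round budget is spent."""
    for _ in range(max_rounds):
        bad = find_bad_cases(prompt, cases)
        if not bad:
            break
        prompt = revise_prompt(prompt, bad)
    return prompt

print(optimize("Uppercase the input.", [{"input": "hello", "expected": "HELLO"}]))
```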
Multi-Agent Systems Are "Burning" Tokens! Anthropic Shares Everything It Discovered
机器之心· 2025-06-14 04:12
Core Insights
- Anthropic's new research on multi-agent systems highlights the advantages of using multiple AI agents for complex research tasks, emphasizing their ability to adapt and explore dynamically [2][3][6][7].

Multi-Agent System Advantages
- Multi-agent systems excel at research tasks that require flexibility and the ability to adjust methods based on ongoing discoveries, as agents can operate independently and explore various aspects of a problem simultaneously [7][8].
- Anthropic's internal evaluations show that their multi-agent system outperforms single-agent systems by 90.2% on breadth-first query tasks [8].
- The architecture allows for efficient token consumption, with multi-agent systems demonstrating a significant performance boost compared to single-agent models [9][10].

System Architecture
- The multi-agent architecture follows a "coordinator-worker" model, in which a lead agent coordinates tasks among several specialized sub-agents (sketched below) [14][18].
- The lead agent analyzes user queries, creates sub-agents, and oversees their independent exploration of different aspects of the query [19][21].

Performance Evaluation
- Traditional evaluation methods are inadequate for multi-agent systems due to their non-linear and varied paths to achieving results; flexible evaluation methods are necessary [44][45].
- Anthropic employs an "LLM-as-judge" approach for evaluating outputs, which enhances scalability and practicality in assessing the performance of multi-agent systems [49][53].

Engineering Challenges
- The complexity of maintaining state in intelligent agent systems poses significant engineering challenges, as minor changes can lead to substantial behavioral shifts [56][61].
- Anthropic has implemented robust debugging and tracking mechanisms to diagnose and address failures in real time [57].

Conclusion
- Despite the challenges, multi-agent systems have shown immense potential for open-ended research tasks, provided they are designed with careful engineering, thorough testing, and a deep understanding of current AI capabilities [61].
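The coordinator-worker pattern reduces to a small amount of orchestration code. Below is a minimal sketch with a stubbed model call; the fixed three-way decomposition and the function names are illustrative assumptions, not Anthropic's implementation, where the lead agent itself plans the decomposition:

```python
# Sketch of the "coordinator-worker" multi-agent pattern described above.
# call_llm() is a stub; replace with a real model client to run for real.

def call_llm(role: str, task: str) -> str:
    """Stand-in for a real model call; replace with your provider's client."""
    return f"[{role}] findings for: {task}"

def lead_agent(query: str) -> str:
    """Decompose the query, fan work out to sub-agents, then synthesize."""
    # A real lead agent would let the model plan this decomposition.
    subtasks = [f"{query} (aspect {i})" for i in range(1, 4)]
    # Each worker explores one aspect in its own isolated context.
    results = [call_llm(f"worker-{i}", task)
               for i, task in enumerate(subtasks, start=1)]
    return call_llm("lead", "synthesize:\n" + "\n".join(results))

print(lead_agent("What drives token costs in multi-agent research systems?"))
```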
DeepSeek vs. ChatGPT: The Logic of Choosing Between Free and Paid
Sou Hu Cai Jing· 2025-06-04 06:29
Core Insights
- The emergence of DeepSeek, a domestic open-source AI model, has sparked discussion thanks to its free-to-use advantages, yet many still prefer to pay for ChatGPT, raising questions about user preferences and the quality of AI outputs [1][60].
- The output quality of AI tools is significantly influenced by user interaction, with 70% of the output quality depending on how users design their prompts [4][25].

Technical Differences
- DeepSeek utilizes a mixture-of-experts model with a training cost of $5.5 million, making it a cost-effective alternative to ChatGPT, whose training costs run into the hundreds of millions [2].
- In the Chatbot Arena test, DeepSeek ranked third, demonstrating competitive performance and particularly excelling in mathematical reasoning with a 97.3% accuracy rate on the MATH-500 test [2].

Performance in Specific Scenarios
- DeepSeek has shown superior performance in detailed analyses and creative writing tasks, providing comprehensive insights and deep-thinking capabilities [3][17].
- The model's reasoning process is more transparent but requires structured prompts for optimal use, indicating that user guidance is crucial for maximizing its potential [7][12].

Cost and Efficiency
- DeepSeek's pricing is 30% lower than ChatGPT's, with 20% higher processing efficiency and 25% lower energy consumption [8][9].
- For enterprises, private deployment of DeepSeek can be cost-effective in the long run, with a one-time server investment of around $200,000 that avoids ongoing API fees [9][10].

Deployment Flexibility
- DeepSeek offers flexibility in deployment, allowing individual developers to run the 7B model on standard hardware, while enterprise setups can support high concurrency [11][10].
- The model's ability to run on lightweight devices significantly lowers the barrier to AI adoption [11].

Advanced Prompting Techniques
- Mastery of advanced prompting techniques, such as "prompt chaining" and "reverse thinking," can significantly enhance DeepSeek's effectiveness (a prompt-chaining sketch follows below) [13][14].
- The model's performance can be optimized with multi-role prompts, allowing it to balance professionalism and readability [15][16].

Language Processing Capabilities
- DeepSeek demonstrates a 92.7% accuracy rate in Chinese semantic understanding, surpassing ChatGPT's 89.3%, and supports classical-literature analysis and dialect recognition [17].

Industry Applications
- In finance, DeepSeek has improved investment decision efficiency by 40% for a securities company [18].
- In the medical field, it has achieved an 85% accuracy rate in disease diagnosis, approaching the level of professional doctors [19].
- For programming assistance, DeepSeek's error rate is 23% lower than GPT-4.5's, with 40% faster response times [20].

Complementary Nature of AI Tools
- DeepSeek and ChatGPT are not mutually exclusive but complementary tools, each suited to different tasks depending on user needs [21][22].
- DeepSeek is preferable for deep reasoning, specialized knowledge, and data privacy, while ChatGPT excels in multi-modal interaction and creative content generation [24][22].

Importance of Prompting Skills
- The ability to design effective prompts is becoming a core competency in the AI era, directly influencing the quality of AI outputs [54][55].
- The book "DeepSeek Application Advanced Tutorial" aims to enhance users' prompting skills and unlock the model's full potential [61].
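Of the techniques named above, prompt chaining is the simplest to illustrate: each stage's output becomes part of the next stage's prompt. A minimal sketch with a stubbed model call; the three stages are illustrative, and the stub can be swapped for a real client (DeepSeek's API is OpenAI-compatible):

```python
# Sketch of prompt chaining: each stage's output feeds the next stage's prompt.
# call_model() is a stub; swap in a real client to run the chain for real.

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return f"<model output for: {prompt[:48]}...>"

def chained_write(topic: str) -> str:
    """Three-stage chain: outline -> draft -> polish."""
    outline = call_model(f"Draft a three-point outline on: {topic}")
    draft = call_model(f"Expand this outline into full prose:\n{outline}")
    return call_model(f"Polish for a general audience; keep it concise:\n{draft}")

print(chained_write("how a mixture-of-experts design lowers training cost"))
```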