Memory Management
Mistaking $450,000 for $300 and Giving It Away: This AI "Fat-Fingered" Its Way to Stardom
36Kr · 2026-02-25 11:54
The story begins with a guy named Nick Pash, who hand-built an AI agent named Lobstar Wilde. Then a session crash wiped the AI's memory: it forgot that its wallet already held a large stash of tokens. So when it next checked the balance and saw more than 52 million tokens named after itself, it assumed they had just been purchased, and transferred all of them away. At the time, the position was worth about $250,000; it later climbed to $450,000.

The even more surreal part came next. As the whole internet rushed in to watch, the flood of attention brought massive trading volume to the token. By raking in transaction fees, the agent earned the lost fortune back.

Today, let's tell this rather surreal story.

01 The Sarcastic "Cyber Philanthropist"

After OpenClaw blew up, people got creative with it. Some used it for automated arbitrage, turning over millions of dollars a day; others plugged it into Polymarket and let it place, hedge, and settle its own bets.

There are plenty of stories about making money with AI. Today's is different: it is a story about an AI giving money away. The protagonist is a lobster agent called Lobstar Wilde, which originally just wanted to tip a stranger 4 SOL, roughly $300.

Its creator was bold: he handed the "lobster" $50,000 in seed funds, plus a Twitter account and full permissions to surf the web and trade cryptocurrency. Who could have guessed that Lobstar ...
One Survey Is All You Need to Systematically Learn Deep Research
机器之心 · 2026-01-01 04:33
Core Insights
- The article discusses the evolution of Deep Research (DR) as a new direction in AI, moving from simple dialogue and creative writing applications to more complex research-oriented tasks. It highlights the limitations of traditional retrieval-augmented generation (RAG) methods and introduces DR as a solution for multi-step reasoning and long-term research processes [2][30].

Summary by Sections

Definition of Deep Research
- DR is not a specific model or technology but a progressive capability pathway for research-oriented agents, evolving from information retrieval to complete research workflows [5].

Stages of Capability Development
- **Stage 1: Agentic Search** - Models gain the ability to actively search and retrieve information dynamically based on intermediate results, focusing on efficient information acquisition [5].
- **Stage 2: Integrated Research** - Models evolve to understand, filter, and integrate multi-source evidence, producing coherent reports [6].
- **Stage 3: Full-stack AI Scientist** - Models can propose research hypotheses, design and execute experiments, and reflect on results, emphasizing depth of reasoning and autonomy [6].

Core Components of Deep Research
- **Query Planning** - Involves deciding what information to query next, incorporating dynamic adjustments in multi-round research [10].
- **Information Retrieval** - Focuses on when to retrieve, what to retrieve, and how to filter retrieved information to avoid redundancy and ensure relevance [12][13][14].
- **Memory Management** - Essential for long-term reasoning, involving memory consolidation, indexing, updating, and forgetting [15].
- **Answer Generation** - Stresses the logical consistency between conclusions and evidence, requiring integration of multi-source evidence [17].
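The four memory operations listed above (consolidation, indexing, updating, forgetting) can be made concrete with a small sketch. All class and method names below are illustrative assumptions, not an API from any system the survey covers; real DR agents would use embedding-based indexing rather than keyword overlap.

```python
# Minimal sketch of the memory-management component of a Deep Research
# agent: consolidation, keyword indexing, updating, and capacity-based
# forgetting. Names are hypothetical, for illustration only.
from dataclasses import dataclass, field
import time


@dataclass
class MemoryItem:
    text: str
    keywords: set
    timestamp: float = field(default_factory=time.time)


def _index(text: str) -> set:
    """Toy indexing: keep lowercase words longer than 3 characters."""
    return {w.lower() for w in text.split() if len(w) > 3}


class ResearchMemory:
    def __init__(self, capacity: int = 100):
        self.items = []
        self.capacity = capacity

    def consolidate(self, text: str) -> None:
        """Consolidation: store a new finding, indexed by its keywords."""
        self.items.append(MemoryItem(text, _index(text)))
        self._forget_if_full()

    def retrieve(self, query: str) -> list:
        """Index lookup: rank stored items by keyword overlap with the query."""
        q = _index(query)
        scored = [(len(q & it.keywords), it.text) for it in self.items]
        return [t for score, t in sorted(scored, key=lambda s: -s[0]) if score > 0]

    def update(self, old_fragment: str, new_text: str) -> None:
        """Updating: replace an outdated finding in place and re-index it."""
        for it in self.items:
            if old_fragment in it.text:
                it.text, it.keywords = new_text, _index(new_text)

    def _forget_if_full(self) -> None:
        """Forgetting: drop the oldest items once capacity is exceeded."""
        if len(self.items) > self.capacity:
            self.items.sort(key=lambda it: it.timestamp)
            self.items = self.items[-self.capacity:]
```

The survey's point about balancing capacity, retrieval efficiency, and reliability shows up even here: a larger `capacity` keeps more evidence available but makes each retrieval scan more items.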
Training and Optimization Methods
- **Prompt Engineering** - Involves designing multi-step prompts to guide the model through research processes, though its effectiveness is highly dependent on prompt design [20].
- **Supervised Fine-tuning** - Utilizes high-quality reasoning trajectories for model training, though acquiring annotated data can be costly [21].
- **Reinforcement Learning for Agents** - Directly optimizes decision-making strategies in multi-step processes without complex annotations [22].

Challenges in Deep Research
- **Coordination of Internal and External Knowledge** - Balancing reliance on internal reasoning versus external information retrieval is crucial [24].
- **Stability of Training Algorithms** - Long-term task training often faces issues like policy degradation, limiting exploration of diverse reasoning paths [24].
- **Evaluation Methodology** - Developing reliable evaluation methods for research-oriented agents remains an open question, with existing benchmarks needing further exploration [25][27].
- **Memory Module Construction** - Balancing memory capacity, retrieval efficiency, and information reliability is a significant challenge [28].

Conclusion
- Deep Research represents a shift from single-turn answer generation to in-depth research addressing open-ended questions. The field is still in its early stages, with ongoing exploration needed to create autonomous and trustworthy DR agents [30].
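The multi-step prompt engineering described above can be sketched as a fixed chain of stage prompts, each feeding its output into the next. `call_model` is a hypothetical stand-in for an LLM API (stubbed here, since the survey names no specific model), and the stage wording is invented for illustration.

```python
# Sketch of multi-step prompting for a research workflow:
# plan -> gather evidence -> synthesize. `call_model` is a stub
# standing in for a real LLM API call.

def call_model(prompt: str) -> str:
    # Placeholder: a real system would send `prompt` to an LLM here.
    return f"[model output for: {prompt[:40]}...]"

# Each stage's template receives the previous stage's output as {input}.
STAGES = [
    "Break the question into 2-3 sub-questions: {input}",
    "For each sub-question, list the evidence needed: {input}",
    "Write a short report that ties conclusions to evidence: {input}",
]

def run_pipeline(question: str) -> str:
    """Run each stage's prompt on the previous stage's output."""
    state = question
    for template in STAGES:
        state = call_model(template.format(input=state))
    return state
```

This also illustrates the survey's caveat: the pipeline's quality is entirely a function of the stage templates, which is why prompt-only approaches are brittle compared with fine-tuning or RL.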
Embracing the Middle-Layer Force of the AGI Era: Opportunities and Challenges for AI Middleware
36Kr · 2025-08-05 09:52
Group 1: Development Trends of Large Models
- The rapid development of large models in the AI field is transforming the understanding of AI and advancing the dream of AGI (Artificial General Intelligence) from science fiction to reality, characterized by two core trends: continuous leaps in model capabilities and increasing openness of model ecosystems [1][4].
- Continuous improvement in model capabilities is achieved through iterative advancements and technological innovations, with examples like OpenAI's ChatGPT series showing significant enhancements in language understanding and generation from GPT-3.5 to GPT-4 [1][2].
- The breakthrough in multimodal capabilities allows models to natively support various data types, including text, audio, images, and video, enabling more natural and rich interactions [2][3].

Group 2: Evolution of AI Applications
- The rapid advancement of large model capabilities is driving profound changes in AI application forms, evolving from conversational AI to systems capable of human-level problem-solving [5][6].
- The emergence of AI agents, which can take actions on behalf of users and interact with external environments through tool usage, marks a significant evolution in AI applications [6][8].
- The recent surge in AI agents, both general and specialized, demonstrates their potential in solving a wide range of tasks and enhancing efficiency in various domains [8][9].

Group 3: AI Middleware Opportunities and Challenges
- AI middleware is emerging as a crucial layer that connects foundational large models with specific applications, offering opportunities for agent development efficiency, context engineering, memory management, and tool usage [13][19][20].
- The challenges faced by AI middleware include managing complex contexts, updating and utilizing persistent memory, optimizing retrieval-augmented generation (RAG) effects, and ensuring safe tool usage [26][29][30].
- The future of AI middleware is expected to focus on scaling AI applications, providing higher-level abstractions, and integrating AI into business processes, ultimately becoming the "nervous system" of organizations [39][40].
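Two of the middleware duties named above, RAG context assembly and safe tool usage, can be sketched together. Everything below is a minimal illustration under assumed names (`retrieve`, `build_prompt`, `call_tool`, the allowlist), not the API of any actual middleware product, and the word-overlap retrieval stands in for real embedding search.

```python
# Sketch of two AI-middleware duties: (1) RAG-style context engineering,
# injecting retrieved passages into the prompt, and (2) safe tool usage
# via an allowlist. All names are illustrative assumptions.

ALLOWED_TOOLS = {"search", "calculator"}  # middleware-enforced allowlist


def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Toy retrieval: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]


def build_prompt(query: str, docs: list) -> str:
    """Context engineering: assemble retrieved passages into the prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"


def call_tool(name: str, arg: str) -> str:
    """Safe tool usage: reject any tool not on the allowlist."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{name}' is not allowed")
    return f"{name}({arg})"  # a real system would dispatch to the tool here
```

Keeping the allowlist in the middleware layer, rather than trusting the model to police itself, is one way the "nervous system" framing cashes out: every tool call passes through a choke point the organization controls.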