Long-term memory
Clawdbot goes viral — whose jobs will it take?
创业邦· 2026-02-01 03:44
Reprinted from AIX财经 (ID: AIXcaijing), "New era of AI, new perspectives on finance." Author: Chen Dan | Editor: Wei Jia | Image source: Pixabay

Almost no one expected that, right at the start of 2026, the tech world would be rocked by an AI assistant built by a single developer. The tool, named Clawdbot (later renamed OpenClaw), is a personal AI agent that runs on the user's local device and is operated through chat apps such as WhatsApp, Telegram, and Discord. It does more than chat — it gets things done: managing email and calendars, automating tasks, browsing the web, and running scripts, like a "digital butler" on duty 24 hours a day. Since the start of this week, attention to Clawdbot has grown exponentially: its GitHub star curve shot up almost vertically within days; on second-hand markets worldwide, small machines such as the Mac mini were bought up in bulk to meet deployment demand; and technical communities moved quickly into benchmarking, cloning, and building on top of it. Investor Dave Morin called it the first technology since ChatGPT to make him feel he was living in the future. More dramatic still, this breakout product came not from a tech giant or a capital-backed unicorn, but from an engineer with an iOS development background, 彼得·斯坦 ...
From Prompt to Agent: The Core Logic of the AI Mindset Leap
36Kr· 2026-01-19 02:30
In hands-on AI training work at big tech companies, the essential difference between Prompt thinking and Agent thinking is reshaping how people work. This article breaks down how to upgrade traditional "literary-composition" prompts into an "engineering-management" Agent architecture, revealing the practical methodology — and the pitfalls — behind building internal "digital employee clusters." After all this time doing AI training at a big company, my strongest impression is this: Prompt thinking is "literary composition," while Agent thinking is "engineering management." The mindset leap: from "interviewer" to "squad leader." Many people write prompts with an interviewer's mindset: throw out a pile of requirements, fold your arms, and wait for the model to hand you a perfect answer. When the answer falls short, they pile on qualifiers and emphatic wording, even resorting to threats and bribes. In the Agent world, your job is instead to be the author of the SOP (standard operating procedure). Big-company takeaway: internally, good Agent designs are "structured." Decomposing a complex task into micro-steps the model can get right with its eyes closed beats writing one perfect prompt (a minimal sketch of this decomposition follows below). If you are still writing ornate 500-character prompts and hoping an "incantation" gets lucky, you may be falling into the trap of "low-level diligence." Today I take this Prompt-to-Agent mental model apart and share the know-how big companies rarely let out. Remember: an instruction solves only a single point; only a workflow can ...
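To ground the SOP idea, here is a minimal sketch of the decomposition pattern: one fuzzy task split into micro-steps, each with a narrow instruction and a mechanical check. The `call_llm` helper is a hypothetical stand-in for whatever model API you use, and the checks are deliberately toy-simple.

```python
# A minimal sketch of "Agent thinking": decompose one fuzzy task into
# small, individually verifiable steps instead of one giant prompt.
# `call_llm` is a hypothetical stand-in for any chat-completion API.
from typing import Callable

def call_llm(prompt: str) -> str:
    """Placeholder: wire up your model provider of choice here."""
    raise NotImplementedError

def step(instruction: str, payload: str, check: Callable[[str], bool]) -> str:
    """Run one micro-step; retry once if the output fails its check."""
    for _ in range(2):
        out = call_llm(f"{instruction}\n\n---\n{payload}")
        if check(out):
            return out
    raise ValueError(f"Step failed its check: {instruction!r}")

def summarize_report(raw_text: str) -> str:
    # Step 1: extract facts only -- easy to verify mechanically.
    facts = step(
        "List every factual claim in the text, one per line, starting with '- '.",
        raw_text,
        check=lambda s: s.strip().startswith("- "),
    )
    # Step 2: draft from the verified facts, not from the raw text.
    return step(
        "Write a summary of at most 3 sentences using ONLY these claims.",
        facts,
        check=lambda s: len(s.split(".")) <= 5,  # rough sentence cap
    )
```

The design point is that each check is something code can verify, so a failure is caught at the step that caused it — the "workflow over instruction" idea in the excerpt above.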
Racing toward AGI: Claude takes the year-end crown, stunning the internet with nearly 5 hours of autonomous coding
36Kr· 2025-12-22 02:02
Core Insights
- The article highlights the impressive capabilities of Anthropic's coding model, Claude Opus 4.5, which has outperformed competitors such as OpenAI's GPT-5.1-Codex-Max on coding tasks [1][3][4].

Group 1: Performance Metrics
- Claude Opus 4.5 can code autonomously for up to 5 hours without breaking down, a significant advance for AI coding agents [2].
- Claude Opus 4.5's 50% task completion time is approximately 4 hours and 49 minutes, the longest reported to date, versus 2 hours and 53 minutes for GPT-5.1-Codex-Max [14].
- Despite its longer 50% task completion time, Opus 4.5's 80% task completion time is only 27 minutes, below GPT-5.1-Codex-Max's 32 minutes, indicating a smoother success-rate curve on longer tasks [17][20] (see the sketch after this list).

Group 2: Future Projections
- By 2026, AI agents are expected to complete a full human workday independently, with capabilities reaching tasks equivalent to several months of human work by 2028 [13].
- Advances in AI coding agents are accelerating, moving from minute-scale to hour-scale tasks — a significant leap in capability [9][10].

Group 3: Memory Challenges
- The article identifies memory as the final barrier to artificial general intelligence (AGI), emphasizing that current AI models cannot retain long-term memory effectively [25][30].
- Current AI systems rely primarily on retrieval-based memory, which is insufficient for complex tasks, highlighting the need for a more sophisticated memory system that mimics human memory [33][35].
- The industry anticipates breakthroughs in memory systems within the next year, which could significantly enhance AI's learning capabilities and overall performance [40][41].
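One way to reconcile the two horizon figures above is to read them as two points on a single success-versus-task-length curve. The sketch below assumes a logistic fall-off in log task length — our modeling assumption, not the article's stated methodology — and shows that Opus 4.5's numbers imply a shallower slope, i.e. the "smoother success-rate curve" mentioned above.

```python
# Back-of-envelope sketch: treat the two reported horizons as two points
# on one logistic success curve, p(t) = 1 / (1 + (t / t50) ** k), where
# t50 is the 50%-success task length and k sets the steepness. The
# logistic-in-log-time form is our assumption, not the article's method.
import math

def steepness(t50_min: float, t80_min: float) -> float:
    """Solve k from p(t80) = 0.8, i.e. (t80 / t50) ** k = 0.25."""
    return math.log(0.25) / math.log(t80_min / t50_min)

# Figures quoted in the article, converted to minutes.
opus_k = steepness(t50_min=4 * 60 + 49, t80_min=27)
codex_k = steepness(t50_min=2 * 60 + 53, t80_min=32)

print(f"Opus 4.5 steepness k ~ {opus_k:.2f}")       # ~ 0.58
print(f"GPT-5.1-Codex-Max k ~ {codex_k:.2f}")       # ~ 0.82
# The smaller k for Opus 4.5 means success decays more slowly as tasks
# get longer -- the "smoother success-rate curve" the article describes.
```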
Will "Memory as a Context" Redefine the Transformer's "Memory Paradigm"?
机器之心· 2025-12-14 01:30
Group 1
- The article discusses the concept of "Memory as a Context" and its potential to redefine the memory mechanisms of Transformers, addressing the limitations of current LLM memory capabilities [6][8].
- Google's Titans architecture introduces a neural long-term memory module that learns and optimizes online at test time, marking a shift from passive data storage to active learning [7][8] (a conceptual sketch follows this list).
- The Titans framework includes three architectural variants — "Memory as a Context," "Memory as a Gate," and "Memory as a Layer" — each a different approach to integrating memory capabilities with Transformer models [7][8].

Group 2
- The article traces the evolution of LLM memory mechanisms from static caches to adaptive test-time learning systems, enabling models to adjust memory strategies dynamically based on task requirements [9][10].
- A review of the past seven years of research on the core memory operations — reading, writing, forgetting, and capacity management — reveals the limitations of static caching mechanisms and recent advances in improving each operation [10].
- The research emphasizes selective writing, real-time decision-making, and adaptive resource allocation as keys to enhancing the memory capabilities of Transformers [10].
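Since Titans' defining move is memory that keeps learning during inference, a toy illustration helps. Below is a minimal PyTorch sketch of that general idea — gradient updates to a small memory network at test time, driven by how "surprising" a new association is. The class and method names are ours, and this is a conceptual illustration of test-time memory, not Google's actual Titans implementation.

```python
# Conceptual sketch of a test-time-learned memory module: a small MLP
# "memory" is updated by gradient steps on a surprise signal *during
# inference*, rather than sitting frozen like a KV cache.
import torch
import torch.nn as nn

class NeuralMemory(nn.Module):
    def __init__(self, dim: int, lr: float = 1e-2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim)
        )
        self.lr = lr

    def read(self, query: torch.Tensor) -> torch.Tensor:
        """Retrieve: in a 'Memory as a Context' setup, this output would
        be prepended to the model's context."""
        return self.net(query)

    @torch.enable_grad()
    def write(self, key: torch.Tensor, value: torch.Tensor) -> float:
        """One online step: the harder `value` is to predict from `key`
        (the 'surprise'), the larger the update to the memory weights."""
        loss = (self.net(key) - value).pow(2).mean()
        grads = torch.autograd.grad(loss, list(self.net.parameters()))
        with torch.no_grad():
            for p, g in zip(self.net.parameters(), grads):
                p -= self.lr * g
        return loss.item()

mem = NeuralMemory(dim=64)
x = torch.randn(8, 64)
surprise = mem.write(key=x, value=x.roll(1, dims=0))  # toy association
retrieved = mem.read(x)
```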
A memory plug-in arrives: EverMemOS, an open-source memory system for AI, is released
Nanfang Dushi Bao (Southern Metropolis Daily)· 2025-11-18 10:46
Core Insights
- EverMind has launched its flagship product EverMemOS, a long-term memory operating system for AI agents, released as an open-source version on GitHub for developers and AI teams to deploy and test [1].
- A cloud service version is expected within the year, providing enhanced technical support, data persistence, and scalability for enterprise users [1].
- EverMemOS has surpassed previous work on mainstream long-term memory benchmarks, setting a new state of the art (SOTA) [1][4].

Group 1: Product Features and Innovations
- EverMemOS is built on a brain-inspired architecture that gives AI continuity over time, addressing the tendency of large language models (LLMs) to "forget" during long-running tasks [3][4].
- The system uses a four-layer architecture modeled on human memory: an agent layer for task understanding, a memory layer for long-term memory management, an indexing layer for efficient retrieval, and an interface layer for seamless integration with enterprise applications [6][7] (a toy sketch of this layering follows the list).
- Key innovations include a modular memory framework that organizes and retrieves memories dynamically, keeping AI interactions coherent and personalized through long-term user understanding [7].

Group 2: Performance Metrics
- EverMemOS scored 92.3% on the LoCoMo and 82% on the LongMemEval-S long-term memory benchmarks, significantly exceeding previous SOTA levels [4][6].
- It is the first system to support both one-on-one conversations and complex multi-party collaboration, a significant advance for memory systems in AI applications [4].
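To make the layering concrete, here is a toy sketch of the agent/memory/indexing pattern described above. Every class and method name is hypothetical, invented for illustration; this is not EverMemOS's actual API.

```python
# Toy sketch of a four-layer memory stack: agent, memory, indexing,
# and a thin interface at the bottom. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    text: str
    tags: set[str] = field(default_factory=set)

class IndexLayer:
    """Indexing layer: cheap tag lookup (a real system would use embeddings)."""
    def __init__(self):
        self.items: list[MemoryItem] = []
    def add(self, item: MemoryItem):
        self.items.append(item)
    def search(self, tags: set[str]) -> list[MemoryItem]:
        return [m for m in self.items if m.tags & tags]

class MemoryLayer:
    """Memory layer: decides what is worth keeping long-term."""
    def __init__(self, index: IndexLayer):
        self.index = index
    def remember(self, text: str, tags: set[str]):
        if tags:  # selective writing: only store taggable, reusable facts
            self.index.add(MemoryItem(text, tags))

class AgentLayer:
    """Agent layer: interprets the task and pulls relevant memories."""
    def __init__(self, memory: MemoryLayer):
        self.memory = memory
    def answer(self, question: str, topic_tags: set[str]) -> str:
        context = self.memory.index.search(topic_tags)
        return f"{question} | recalled: {[m.text for m in context]}"

# Interface layer: the thin entry point an application would call.
agent = AgentLayer(MemoryLayer(IndexLayer()))
agent.memory.remember("User prefers concise answers", {"preference"})
print(agent.answer("How should I reply?", {"preference"}))
```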
Zhang Xiaojun in Conversation with OpenAI's Shunyu Yao: Systems That Generate New Worlds
Founder Park· 2025-09-15 05:59
Core Insights
- The article discusses the evolution of AI, focusing on the transition to the "second half" of AI development and emphasizing the role of language and reasoning in creating more generalizable AI systems [4][62].

Group 1: AI Evolution and Language
- AI has evolved from rule-based systems to deep reinforcement learning, and now to language models that can reason and generalize across tasks [41][43].
- Language is highlighted as a fundamental tool for generalization, allowing AI to tackle a wide variety of tasks by leveraging reasoning capabilities [77][79].

Group 2: Agent Systems
- The definition of "Agent" has expanded to systems that interact with their environment and make decisions based on reasoning, rather than merely following predefined rules [33][36].
- Language agents represent a significant shift: they can perform tasks in complex environments, such as coding and internet navigation, that were previously challenging for AI [43][54].

Group 3: Task Design and Reward Mechanisms
- Defining effective tasks and environments for AI training is critical; the current bottleneck lies in task design rather than model training [62][64].
- Intrinsic rewards, based on outcomes rather than processes, are proposed as a key factor for successful reinforcement learning applications [88][66] (a toy outcome-reward sketch follows this list).

Group 4: Future Directions
- Future AI development is seen as a combination of enhancing agent capabilities through better memory systems and intrinsic rewards, and exploring multi-agent systems [88][89].
- AI's potential to generalize across tasks is highlighted, with coding and mathematical tasks serving as prime examples of areas where AI can excel [80][82].
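Group 3's outcome-over-process point is easiest to see in a coding task, where the reward can be computed by running the artifact rather than grading the reasoning. A minimal, purely illustrative sketch — the `solve` contract and test format are invented for this example:

```python
# A minimal sketch of an outcome-based reward for a coding task: the
# reward depends only on whether the produced code passes its tests,
# never on how the code was written. Purely illustrative.

def run_tests(solution_src: str, tests: list[tuple[tuple, object]]) -> float:
    """Execute candidate code; reward 1.0 only if every test passes."""
    namespace: dict = {}
    try:
        exec(solution_src, namespace)  # fine for a trusted toy setting only
        fn = namespace["solve"]
        ok = all(fn(*args) == expected for args, expected in tests)
    except Exception:
        return 0.0                     # a crash earns no partial credit
    return 1.0 if ok else 0.0

candidate = "def solve(a, b):\n    return a + b\n"
reward = run_tests(candidate, tests=[((1, 2), 3), ((0, 0), 0)])
print(reward)  # 1.0 -- the outcome, not the reasoning process, is scored
```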