Continual Learning
2026 Is a "Big Year for Multimodal AI"! How Can Ordinary People Make Sense of a Ten-Trillion-Dollar Shift?
混沌学园· 2026-02-02 12:47
The following article is from 海外独角兽 (which researches the great companies of technology's new age of exploration), by the 拾象 research team. As the "brute-force aesthetics" of large models moves into deep water, the narrative logic of the global AI industry is undergoing a profound shift. With OpenAI, Google, and Anthropic locked in a three-way contest, who will be first to push open the door to AGI? How will the next technical paradigm, continual learning, upend the current landscape? In our January 10, 2026 course, we invited Guangmi Li, founder and CEO of 拾象, to teach "2026 AGI Insights and Investment Trends". As a front-line investor who moves between Silicon Valley and China, he takes us beyond the noise of day-to-day phenomena, reviews the AI arms race from a macro perspective, and calls the decisive factors for AI startups in 2026. Drawing on a global view, Li reviews in depth how the AI wave has evolved over the past three years and offers forward-looking judgments on AGI's development in 2026. Combining first-hand research in Silicon Valley, he dissects the strategic divergence among the top model companies (OpenAI, Google, Anthropic), examines the camp rivalry in compute infrastructure (GPU vs. TPU), and shows how, riding the dividend of "technology spillover", founders can capture the next hundred-billion-dollar "new species". The course covers not only macro investment strategy but also, through Cur ...
How To Play AI Beta: 拾象's 2026 AGI Investment Thinking, Open-Sourced
海外独角兽· 2026-02-02 01:14
By Guangmi, Penny, Cage, Haina, Feihong, Siqi, Nathan. The pace of change and the evolution of the competitive landscape in AI always outrun market expectations; almost every month the market consensus and narrative flip. This report is the 拾象 team's systematic review of those changes, written to recalibrate our read on the current state of AI competition, and it also unpacks the core technology and product trends likely to become the main storylines of 2026. We are open-sourcing the report in the hope of exploring together which of these are structural opportunities and which are merely transient noise:
1. Google has returned to the top of the narrative, but AI is not a zero-sum game; OpenAI's and Anthropic's odds of "winning" remain strong;
2. Continual learning has become the new paradigm consensus that nearly every AI lab is betting on; 2026 will bring new signals;
3. The AGI race resembles autonomous driving: going from L3 to full L4 is extremely hard, but in vertical domains such as knowledge work, localized L3/L4 has already delivered meaningful efficiency gains and economic value;
4. The "NVIDIA + OpenAI" storyline may be underpriced by the market in the short term; betting on OpenAI today is betting on "something never seen" in the AI era;
5. ...
Did Silicon Valley's "Too Much Money" Ruin AI?! Former OpenAI o1 Lead Speaks Out: Stop Hyping Google, Q-Star Was Turned into a Soap Opera, and Seven Years of High Pressure Nearly "Drove Him Mad"!
Xin Lang Cai Jing· 2026-01-25 01:24
Source: AI前线. Compiled by Tina. This is not departure gossip; it is the choice of someone who, after seven years under high pressure, left an industry that turns technology into drama and research into a spectator sport. When news of Jerry Tworek's departure from OpenAI broke in the first month of 2026, several OpenAI employees all but lost their composure on X: "I'm genuinely devastated", "this hurts". The shared reaction: this came too suddenly, and it weighs too heavily. Jerry is one of the most influential yet least publicly visible figures behind the modern AI wave. When he joined OpenAI in 2019, the company had only about 30 employees. He worked on many of its most important projects, including the reasoning methods later known as Q-Star and Strawberry, which eventually grew into the o1 reasoning model. After leaving, he explained his reasons in a Core Memory podcast interview: he wants to do risky foundational research, the kind no longer possible at a company like OpenAI, where metrics such as user growth take priority. His view of ChatGPT ads captures the disconnect between research and commercialization: "That's a business strategy; my job is training models." The remark corroborates rumors of a widening split between OpenAI's AI research and its product development. In Tworek's ...
Did Silicon Valley's "Too Much Money" Ruin AI?! Former OpenAI o1 Lead Speaks Out: Stop Hyping Google, Q-Star Was Turned into a Soap Opera, and Seven Years of High Pressure Nearly "Drove Him Mad"!
AI前线· 2026-01-24 05:33
Core Viewpoint
- The departure of Jerry Tworek from OpenAI highlights the growing divide between AI research and commercialization, emphasizing the need for risk-taking in foundational research that is increasingly difficult in a competitive corporate environment [3][4][5].

Group 1: Departure and Industry Insights
- Jerry Tworek's exit from OpenAI was met with shock among employees, indicating his significant influence within the company [3][10].
- Tworek criticized the AI industry for a lack of innovation, stating that major companies are developing similar technologies, which pressures researchers to prioritize short-term gains over experimental breakthroughs [4][5].
- He pointed out that Google's success in catching up with OpenAI was due to OpenAI's own missteps, including slow action and a failure to leverage its initial advantages [4][5].

Group 2: Organizational Challenges
- Tworek identified organizational rigidity as a barrier to innovation, where team structures limit cross-team research and collaboration [4][22].
- He expressed concern that the current state of the AI industry resembles a soap opera, where personnel moves and internal conflicts overshadow genuine research progress [6][7].

Group 3: Future Research Directions
- Tworek emphasized the importance of exploring new research paths rather than following the mainstream trajectory, advocating for more diversity in AI model development [30][31].
- He highlighted two underexplored areas: architectural innovation beyond the Transformer model and the integration of continual learning into AI systems [45][47].
- Tworek believes that significant advancements in AI will require a shift away from the current focus on scaling existing models and toward more innovative approaches [26][28].

Group 4: AGI and Industry Evolution
- Tworek updated his perspective on the timeline for achieving AGI, acknowledging that while current models are powerful, they still lack essential capabilities like continual learning and multimodal perception [49][50].
- He noted that the rapid evolution of AI technology and increasing investment in the field could lead to breakthroughs sooner than previously anticipated [51].
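Continual learning recurs across these pieces as the paradigm bet, but none of the entries specify a mechanism. As a purely illustrative sketch (the `ReplayLearner` class and all its parameters are invented here, not any lab's method), the snippet below shows the simplest classic ingredient, experience replay: stored past examples are interleaved into each online update so new data does not simply overwrite what was learned before.

```python
import random

class ReplayLearner:
    """Toy continual learner: a single scalar model y = w * x trained
    online, with a bounded replay buffer mixed into every update to
    resist catastrophic forgetting."""

    def __init__(self, capacity=100, lr=0.1, replay_k=4, seed=0):
        self.capacity = capacity      # max stored past examples
        self.lr = lr                  # SGD step size
        self.replay_k = replay_k      # old samples replayed per new one
        self.buffer = []              # past (x, y) pairs
        self.w = 0.0                  # the model's only parameter
        self.rng = random.Random(seed)

    def _sgd_step(self, x, y):
        # one gradient step on the squared error (y - w*x)^2
        self.w += self.lr * (y - self.w * x) * x

    def observe(self, x, y):
        self._sgd_step(x, y)
        # replay a few stored examples alongside the fresh one
        k = min(self.replay_k, len(self.buffer))
        for xr, yr in self.rng.sample(self.buffer, k):
            self._sgd_step(xr, yr)
        self.buffer.append((x, y))
        if len(self.buffer) > self.capacity:
            self.buffer.pop(0)        # bound memory: drop oldest

learner = ReplayLearner()
for _ in range(200):                  # stream of data from y = 2x
    x = learner.rng.uniform(-1.0, 1.0)
    learner.observe(x, 2.0 * x)
# learner.w has converged close to 2.0
```

Real systems replace the scalar model with a network and the buffer with curated or compressed experience, but the accumulate-and-revisit loop is the core idea.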
拾象 2026 AI Best Ideas: 20 Key Predictions
海外独角兽· 2026-01-01 05:25
Core Insights
- The article presents 20 key predictions for AI trends in 2026, highlighting significant advancements and shifts in the industry [2].

Group 1: AI Paradigms and Trends
- A new AI paradigm centered on continual learning is expected to gain traction in 2026, with positive signals likely to emerge from at least 1-2 technical pathways [5].
- ChatGPT is projected to double its daily active users (DAU) to between 800 million and 1 billion by 2026, establishing itself as a global entry point for users [6].
- An "App Store moment" for ChatGPT is anticipated, leading to the first application generating $100 million ARR within its ecosystem [7].

Group 2: Company Developments and Market Dynamics
- OpenAI is expected to reverse its narrative in the second half of 2026, potentially achieving a valuation exceeding $1 trillion on the strength of its market position and partnerships [9].
- xAI's integration into Tesla is predicted to enhance the synergy between the digital and physical worlds, contributing to advancements in AGI [11].
- 2026 is forecast to be a significant year for enterprise AI, with Anthropic's ARR expected to at least double, surpassing $20 billion [12][14].

Group 3: Technological Innovations
- The multimodal AI sector is anticipated to see a commercial breakthrough, with applications akin to Pokémon GO emerging [15][16].
- Long-horizon tasks and multimodal demands are expected to drive the growth of new data companies, each reaching $1 billion ARR [17].
- Personalization is projected to become a key competitive advantage for leading AI models, deepening user engagement [19].

Group 4: Market Valuations and IPOs
- The AI IPO market is expected to flourish in 2026, with major companies such as SpaceX and OpenAI planning to go public, potentially signaling a peak in market sentiment [32].
- Google is predicted to surpass a $5 trillion market valuation, driven by its strong position in the AI model landscape and its advertising business [34].

Group 5: Infrastructure and Hardware
- Nvidia's aggressive investment in optical interconnect technology is expected to trigger a wave of mergers and acquisitions in the CPO (co-packaged optics) sector [27][28].
- Demand for storage is projected to surge with the multimodal revolution, integrating storage more deeply into computational cores [29].
- A significant increase in reasoning compute is anticipated, with token consumption expected to grow at least 10x in 2026 [30][31].
The Elephant in the Room: Ilya Calls Out AI's "High Scores, Low Competence", Urging a Move from Research to Scaling and Back Again to an Era of Research | Jinqiu Select
锦秋集· 2025-11-26 07:01
Core Insights
- The article discusses the transition from the "scaling era" to a "research era" in AI development, emphasizing the need for innovative paradigms that improve models' generalization capabilities and economic properties [6][11][59].

Group 1: Model Performance and Limitations
- Current AI models score highly in evaluations but lag in real-world economic impact, indicating a disconnect between evaluation metrics and practical applications [17][18].
- Models can perform impressively in one context yet fail in another, often because they overfit to evaluation criteria rather than generalizing to real-world tasks [19][22].
- The phenomenon of "reward hacking" is highlighted, where researchers design training environments that prioritize evaluation scores over real-world applicability [24][25].

Group 2: The Need for a Paradigm Shift
- The article argues for a return to a research-focused approach to address the fundamental problem of generalization in AI, rather than merely scaling existing models [6][11][59].
- The scaling dilemma is discussed: pouring in more compute and data may not yield transformative results without innovative research [57][59].
- Understanding the underlying mechanisms of human learning and decision-making is emphasized, suggesting that AI should incorporate similar principles [73][75].

Group 3: Human Learning vs. AI Learning
- Human learning is characterized by high sample efficiency and the ability to learn from minimal data, in sharp contrast to current AI models, which require extensive data [66][70].
- Human learning mechanisms such as continual learning and robust self-correction are not adequately replicated in AI systems [72][74].
- The discussion includes the role of emotions and value functions in human decision-making, which are often overlooked in AI development [51][53].

Group 4: Future Directions and Research Focus
- Future AI research should focus on developing models that can learn and adapt in real-world environments, rather than just optimizing for specific tasks [97][99].
- The potential for rapid AI-driven economic growth is acknowledged, along with the complexities of that growth [100].
- Robust alignment of AI systems with human values and gradual deployment strategies are emphasized as critical for the safe development of superintelligent AI [103][106].
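The "reward hacking" named above, where training climbs an evaluation metric rather than real usefulness, can be shown with a deliberately tiny toy. The `eval_score` and `true_quality` functions below are made-up stand-ins for a proxy metric and the real objective, not anyone's actual benchmark:

```python
def true_quality(substance, padding):
    """What we actually care about: substance, penalized for filler."""
    return substance - 2 * padding

def eval_score(substance, padding):
    """The proxy metric the optimizer sees: raw length, filler included."""
    return substance + padding

# Greedily "train" against the proxy over a small search space.
candidates = [(s, p) for s in range(11) for p in range(11)]
best = max(candidates, key=lambda sp: eval_score(*sp))

# The proxy optimum maximizes padding too, so true quality collapses:
# best == (10, 10) with eval_score 20 but true_quality -10, while the
# honest answer (10, 0) would have true_quality 10.
```

The mismatch is invisible if you only ever look at `eval_score`, which is exactly the disconnect between benchmark performance and economic impact the article describes.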
Ilya's Latest 20,000-Word Interview: Human Emotions Are Not Baggage but the "Ultimate Algorithm" That AI Lacks
36Kr· 2025-11-26 04:26
Core Insights
- The discussion centers on the limitations of current AI models and new pathways toward superintelligence, emphasizing the disconnect between model performance in evaluations and in real-world applications [3][4][20].
- Ilya Sutskever highlights the need to transition back to a research-focused paradigm, moving beyond mere scaling of models as its diminishing returns become evident [3][34].
- The concept of a "value function" is introduced as a critical element enabling human-like learning efficiency, which current AI lacks [3][5][6].

Group 1: Current AI Limitations
- Current AI models perform well on evaluation tests but often make basic errors in practical applications, indicating a lack of true understanding and generalization [4][18][20].
- Over-optimizing reinforcement learning (RL) for evaluations has produced models that excel at competitive programming but struggle with real-world problem-solving [4][21].
- Sutskever compares these models to competitive programmers who are skilled at solving specific problems but lack the broader intuition and creativity of more versatile learners [4][22].

Group 2: Human Learning Insights
- Human learning is characterized by high sample efficiency: individuals learn complex skills from minimal data, which is attributed to innate value functions that guide decision-making [5][6][40].
- Evolutionary advantages in human learning, particularly in vision and motor skills, suggest that humans possess learning algorithms superior to current AI systems [5][38].
- The discussion emphasizes the importance of emotional and intuitive feedback in human learning, which AI currently lacks [6][30][31].

Group 3: Strategic Directions for SSI
- Sutskever's new company, SSI, aims to explore safe superintelligence, advocating a gradual release of AI capabilities to raise public awareness of safety [7][52].
- The shift from secretive development to a transparent, gradual release strategy is seen as essential for fostering a collaborative safety environment [7][52].
- SSI's focus on research over immediate market competition is intended to prioritize safety and ethical considerations in AI development [52][54].

Group 4: Research Paradigm Shift
- The transition from the scaling era (2020-2025) back to a research-focused approach is necessary as the limits of scaling become apparent [34][46].
- Sutskever argues that while scaling has been beneficial, it has also homogenized ideas, necessitating a return to innovative research [34][46].
- More efficient use of compute in research is needed, suggesting that breakthroughs may come from novel approaches rather than sheer scale [35][46].
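The "value function" Sutskever invokes is a standard reinforcement-learning object: an estimate of expected future reward that lets a learner judge a state long before any final outcome arrives. A minimal tabular TD(0) sketch makes the idea concrete; the three-state chain below is an invented toy of ours, not an example from the interview:

```python
def td0(episodes, gamma=0.9, alpha=0.5):
    """Tabular TD(0): nudge V(s) toward r + gamma * V(s') at each step."""
    V = {"A": 0.0, "B": 0.0, "terminal": 0.0}
    for episode in episodes:                 # each episode: list of (s, r, s')
        for s, r, s_next in episode:
            target = r + gamma * V[s_next]   # bootstrapped one-step target
            V[s] += alpha * (target - V[s])  # move the estimate toward it
    return V

# Deterministic chain: A -(reward 0)-> B -(reward 1)-> terminal.
episodes = [[("A", 0.0, "B"), ("B", 1.0, "terminal")]] * 50
V = td0(episodes)
# V["B"] converges to 1.0 and V["A"] to gamma * V["B"] = 0.9: the learner
# assigns value to A even though no reward is ever received there.
```

That last property, credit flowing backward to states far from the reward, is what the interview argues humans get cheaply through emotion and intuition and current models largely lack.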
X @Avi Chawla
Avi Chawla· 2025-11-12 06:31
Agent Learning & Development
- Current agents lack continual learning, hindering their ability to build intuition and expertise through experience [1][2]
- A key challenge is enabling agents to learn from interactions and develop heuristics, similar to how humans master skills [1][2]
- Composio is developing infrastructure for a shared learning layer, allowing agents to evolve and accumulate skills collectively [3]
- This "skill layer" provides agents with an interface to interact with tools and build practical knowledge [4]

Industry Trends & Alignment
- Anthropic is exploring similar approaches, codifying agent behaviors as reusable skills [4]
- The industry is moving towards a design pattern where agents progressively turn experience into composable skills [4]

Composio's Solution
- Composio's collective AI learning layer enables agents to share knowledge, allowing them to handle API edge cases and develop real intuition [5]
- This approach facilitates continual learning, where agents accumulate skills through interaction rather than just memorizing [5]
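At its simplest, the shared skill layer described in the thread reduces to a store of learned heuristics keyed by tool and situation, written once by the agent that discovers them and read by every agent thereafter. The `SkillStore` class below is a hypothetical sketch of that pattern, not Composio's actual API:

```python
class SkillStore:
    """Hypothetical shared skill layer: agents record heuristics they
    learn from tool interactions, and any agent can reuse them later."""

    def __init__(self):
        self._skills = {}  # (tool, situation) -> learned heuristic

    def record(self, tool, situation, heuristic):
        # An agent codifies something it learned the hard way.
        self._skills[(tool, situation)] = heuristic

    def lookup(self, tool, situation):
        # Another agent checks for prior experience before acting.
        return self._skills.get((tool, situation))

store = SkillStore()

# Agent 1 hits an API edge case and codifies the fix as a skill.
store.record("github_api", "rate_limited",
             "back off and retry after the Retry-After header")

# Agent 2, facing the same situation later, reuses it instead of relearning.
hint = store.lookup("github_api", "rate_limited")
```

In a real system the heuristic would be structured (a prompt fragment, a code snippet, a tool-call template) and the store would be shared infrastructure across deployments, but the accumulate-then-reuse loop is the essence of the design pattern the thread describes.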