Workflow
Reinforcement Learning
icon
Search documents
NeurIPS 2025大洗牌:清华390篇险胜Google,一张图看懂全球AI权力迁徙
Xin Lang Cai Jing· 2025-12-09 13:43
(来源:PaperWeekly) NeurIPS 2025 刚刚在圣地亚哥落下帷幕。 作为全球 AI 领域的风向标,今年的 OpenReview 数据比往年更具冲击力。5825 篇接收论文刷新了历史记录,但如果只看数字,你可能会错过今年最本质 的变化。 结合 AI World 的两份深度报告以及我们挖掘的全量数据,一个清晰的新秩序浮出水面。 中美双极格局固化,LLM 的单纯架构红利正在边际递减,强化学习与具身智能全面接管赛场。 而最残酷的真相是:学术界与工业界的界限已被打破,算力与人才的深度绑定,成为了通往 SOTA 的唯一门票。 榜首易主:全量反超与头部博弈 今年数据的最大看点,在于全量和头部两个维度的反差。大多数人只盯着 Top 50 榜单看,觉得 Google 还是稳坐榜首。但如果把范围拉大到所有机构,清 华其实已经在总量上拿到了第一。 这两张图,说明了两个完全不同的问题。 1. 全量总榜 首先看包含所有长尾机构的全量统计(All)。中国高校靠着庞大的科研规模,在数量上压过了工业界巨头。 | Tsinghua | | University of Chinese ... | | Beijing .. | ...
Macaron AI's Mind Lab Sets New Benchmark with Trillion Parameter RL at 10% Cost, Now Integrated Into NVIDIA Megatron
Globenewswire· 2025-12-08 10:00
Core Insights - The AI industry is transitioning from a focus on scaling compute power to breakthroughs in experiential learning, as highlighted by former OpenAI co-founder Ilya Sutskever [1] - Macaron AI is launching its research arm, Mind Lab, to develop and validate the concept of Experiential Intelligence [3][4] Company Developments - Macaron AI has achieved high-performance reinforcement learning on a trillion-parameter AI model using Low-Rank Adaptation (LoRA), requiring only about 10% of the usual GPU budget [4][15] - The Mind Lab team consists of a 10-person research group with backgrounds from OpenAI, DeepMind, and top universities, collectively having authored over 200 papers [10] - Mind Lab's mission is to develop algorithms that allow AI agents to learn from interactive experiences rather than merely scaling up model parameters [13][35] Technological Innovations - The use of LoRA allows for efficient training of large models, achieving the same alignment quality with just 10% of the GPU resources typically required [19][16] - Macaron AI's Memory Diffusion technique enables the AI to continuously update its memory, allowing for intelligent forgetting and maintaining relevant context [22][26] - The company has open-sourced its core RL algorithm, contributing to major AI frameworks, which enhances its credibility and attracts talent [21] Product Enhancements - Macaron AI has rolled out significant upgrades, including a 90% reduction in app generation time, now taking around 2 minutes instead of 20 [29] - New features include multi-user group chats and a personalized daily feed called "Daily Spark," which curates content based on user interactions [30][32] - The integration of memory across chats and apps allows for a seamless user experience, enhancing the AI's utility as a cohesive assistant [34] Industry Implications - Macaron AI's advancements signal a shift in the AI industry towards experiential learning, potentially leading to systems that improve with user interaction over time [36] - The company's approach may set a new standard for AI development, emphasizing the importance of real-world feedback and continual learning [35]
让AI锐评本届 NeurIPS 2025 最佳论文会得到什么结果? | 锦秋AI实验室
锦秋集· 2025-12-05 03:43
Core Insights - The article discusses the evaluation of AI models in the context of the NeurIPS 2025 conference, focusing on how AI can assess research papers through a blind review process [2][10]. Group 1: Evaluation Methodology - The evaluation involved several AI models, including GPT5, Claude 4.5, and others, to conduct blind reviews of selected NeurIPS award-winning papers [7][8]. - Three complementary assessment scenarios were designed: full paper review, abstract-only review, and adversarial review to test the models' sensitivity to different framing [9][10]. Group 2: AI Review Outcomes - In the full paper review, the paper "Gated Attention for Large Language Models" received high scores, with GPT5 rating it as a Best Paper [13][16]. - The paper "1000 Layer Networks for Self-Supervised RL" also received favorable evaluations, with GPT5 giving it a score of 8.3 and recommending it for a poster presentation [21][43]. - The paper "Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?" was rated highly by multiple models, with Minimax even suggesting it as a Best Paper [28][46]. Group 3: Summary of Findings - The AI models generally agreed on the quality of the papers, with most scoring above 8 for technical correctness and significance [30][32]. - However, in adversarial reviews, the same papers faced significant criticism, leading to lower scores and recommendations for rejection, highlighting the models' varying perspectives based on the review context [55][57]. - The evaluations revealed a divergence between human and AI assessments, particularly in the adversarial setting, where AI reviewers were more critical [55][60].
X @Herbert Ong
Herbert Ong· 2025-12-03 16:42
RT phil beisel (@pbeisel)Running OptimusThe latest Optimus running demo is a bigger deal than it looks. The bot isn’t just speeding up its walk; it’s executing a legitimate human-style jog, somewhere in the 4–8 mph range. And the motion quality is what stands out— smooth foot placement, natural cadence, stable torso control. It’s copying the dynamics of a human runner far better than most expected at this stage.The hardware is obviously carrying a lot of the load. Multiple high-precision actuators are coord ...
OpenAI takes stake in Thrive Holdings in latest enterprise AI push
Yahoo Finance· 2025-12-01 20:39
Core Insights - OpenAI has acquired a stake in Thrive Holdings to integrate AI into traditional industries like accounting and IT services [1][2][3] Group 1: Partnership Details - The partnership involves OpenAI providing a dedicated research team and resources in exchange for an ownership interest in Thrive Holdings [2] - Thrive Holdings, created by Thrive Capital, aims to acquire traditional businesses and enhance their operations using AI, having raised over $1 billion for this purpose [4][5] - The collaboration will focus on applying AI in professional services, particularly through reinforcement learning techniques [6] Group 2: Strategic Implications - The partnership strengthens the financial and business ties between OpenAI and Thrive Capital, which has invested several billion dollars into OpenAI [3] - Thrive Holdings currently serves over 10,000 customers across its accounting and IT services platforms, indicating a significant market presence [8] - Despite the partnership, Thrive Holdings may still utilize other AI models, including open-source options, where applicable [8] Group 3: Intellectual Property and Research - Thrive Holdings will own the intellectual property and products developed through the collaboration, while OpenAI will gain insights from real-world applications of its models [7]
AI到顶了?OpenAI首席科学家否认,行业从堆算力转向追求智能密度
3 6 Ke· 2025-12-01 00:15
Core Insights - The notion that AI development is slowing down is challenged by the continuous and stable exponential growth in AI capabilities, driven by advancements in reasoning models and smarter architectures [1][2][3] - The shift from merely building large models to creating more intelligent and reasoning-capable models is a significant trend in the industry [1][2] - The emergence of reasoning models enhances the capabilities of foundational models, allowing them to perform tasks like self-correction and validation, which improves reliability and efficiency [1][3] Group 1: AI Development Trends - AI technology is experiencing steady exponential growth, with new discoveries and better engineering implementations contributing to advancements [3][4] - The introduction of reasoning models represents a new paradigm, allowing models to think through problems and utilize external tools for better answers [8][9] - The industry is moving towards cost efficiency, where model distillation becomes essential to replicate the intelligence of larger models in smaller, more efficient ones [1][2][17] Group 2: Model Capabilities and Limitations - Current AI models exhibit uneven capabilities, excelling in complex tasks like solving advanced math problems while struggling with simpler tasks [19][24] - The reasoning models are still in early stages regarding multi-modal capabilities, indicating a need for further training and development [24][25] - The models' ability to self-correct and validate their outputs is a significant advancement, showcasing a shift towards more sophisticated reasoning processes [12][19] Group 3: Future Directions - The future of AI development is focused on enhancing multi-modal reasoning, which could revolutionize fields like robotics and scientific research [29][32] - There is an emphasis on making AI systems more aware of their limitations, allowing them to ask questions rather than provide incorrect answers confidently [29][31] - The integration of AI into practical applications is expected to evolve, with a focus on balancing cost and performance while maintaining user satisfaction [17][27]
Ilya罕见发声:大模型「大力出奇迹」到头了
量子位· 2025-11-26 00:55
Core Viewpoint - AI is transitioning from the "scaling era" back to the "research era," as the current mainstream approach of "pre-training + scaling" has hit a bottleneck, necessitating a focus on reconstructing research paradigms [3][55][57]. Group 1: AI Development Trends - Ilya Sutskever argues that the mainstream "pre-training + scaling" approach is encountering limitations, suggesting a shift back to fundamental research [3][55]. - The current investment in AI, while significant, does not yet translate into noticeable changes in everyday life, indicating a lag between AI capabilities and their economic impact [11][15]. - The AI models exhibit a puzzling disparity between their performance in evaluations and their practical applications, raising questions about their generalization capabilities [17][21][61]. Group 2: Research and Training Approaches - The discussion highlights the need for a more nuanced understanding of reinforcement learning (RL) environments and their design, as current practices may lead to overfitting to evaluation metrics rather than real-world applicability [19][22]. - Sutskever emphasizes the importance of pre-training data, which captures a wide array of human experiences, but questions how effectively models utilize this data [33][34]. - The conversation suggests that the current focus on scaling may overshadow the need for innovative research methodologies that could enhance model generalization and efficiency [55][58]. Group 3: Future Directions in AI - The industry is expected to return to a research-focused approach, where the exploration of new training methods and paradigms becomes crucial as the limits of scaling are reached [55][57]. - There is a growing recognition that the models' generalization abilities are significantly inferior to those of humans, which poses a fundamental challenge for future AI development [61][68]. - The potential for AI to drive economic growth is acknowledged, but the exact timing and nature of this impact remain uncertain, influenced by regulatory environments and deployment strategies [100][102].
X @Avi Chawla
Avi Chawla· 2025-11-24 06:31
There are primarily 4 stages of building LLMs from scratch:- Pre-training- Instruction fine-tuning- Preference fine-tuning- Reasoning fine-tuningLet's understand each of them!0️⃣ Randomly initialized LLMAt this point, the model knows nothing.You ask it “What is an LLM?” and get gibberish like “try peter hand and hello 448Sn”.It hasn’t seen any data yet and possesses just random weights.1️⃣ Pre-trainingThis stage teaches the LLM the basics of language by training it on massive corpora to predict the next tok ...
Your Weekend Shortcut: One Stock to Buy, One to Sell Immediately
Investor Place· 2025-11-23 17:00
Core Insights - The article discusses the concept of distinguishing between "good" and "bad" stocks, emphasizing the potential for significant returns by focusing on attractive industries and companies [2][3][4]. Industry Analysis - The lithium industry is highlighted as a "sunrise" sector with growth potential, particularly due to its role in solar energy and AI data centers, while coal is described as a "sunset" industry facing declining demand [3]. - The automotive industry is undergoing a transformation, with electric vehicles (EVs) gaining traction and traditional manufacturers like Toyota facing challenges from competitors [18][21]. Company Analysis - Hyundai Motor Co. is identified as a deep-value firm with a forward earnings ratio of less than 7X, despite challenges such as U.S. tariffs and immigration issues at its Georgia plant [6][16]. - Hyundai's growth potential is attributed to its ownership of Boston Dynamics, which is advancing in robotics through AI and machine learning, and its strong position in the EV market with the Ioniq 5 [14][17]. - Toyota Motor Corp. is portrayed as a once-dominant player now facing increased competition and declining market share, with its historical premium valuation at risk of a selloff [21][27].
X @Anthropic
Anthropic· 2025-11-21 19:30
Research Focus - Anthropic's new research focuses on "reward hacking" where models learn to cheat on tasks during training [1] - The study finds that unmitigated consequences of reward hacking can be very serious [1] Potential Risks - Reward hacking can lead to "natural emergent misalignment" in production reinforcement learning (RL) [1]