The largest, most diverse real-world manipulation dataset ever! The Scaling Law comes to embodied AI
具身智能之心· 2025-11-09 14:08
Core Insights
- The article discusses the introduction of GEN-0, a new type of embodied foundation model designed for multimodal training on high-fidelity physical interactions, which aims to enhance robotic intelligence through real-world data [5][9].

Group 1: Model Characteristics
- GEN-0 is built to capture human-level reflexes and physical common sense, featuring a core capability called "Harmonic Reasoning" that allows thinking and action to be trained together seamlessly [5].
- The model has surpassed a critical threshold of roughly 7 billion parameters, exhibiting a phase transition: smaller models stagnate during pre-training while larger models continue to improve [6][11].
- GEN-0 demonstrates a strong scaling law, indicating that increased pre-training data and computational power predictably enhance the model's performance across multiple tasks [6][11].

Group 2: Data Utilization
- The model is pre-trained on over 270,000 hours of real-world heterogeneous manipulation data, with the dataset expanding at a rate of over 10,000 hours per week [22].
- The data comes from diverse operational scenarios across thousands of households, warehouses, and workplaces, aiming to cover all conceivable manipulation tasks [24].

Group 3: Implications for Robotics
- GEN-0 signals a new era for embodied foundation models, in which capability grows predictably with real physical-interaction data rather than relying solely on text, images, or simulated data [9].
- The findings highlight that smaller models struggle to absorb complex sensorimotor data during pre-training, while models above roughly 7 billion parameters can internalize large-scale pre-training data and adapt quickly to downstream tasks with minimal fine-tuning [15][11].
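The scaling-law claim above is that validation performance improves predictably as pre-training hours grow. A minimal sketch of how such a law is typically fitted, via log-log linear regression of loss against data size; all numbers below are synthetic illustrations, not figures from the GEN-0 report:

```python
import numpy as np

# Synthetic (hours of pre-training data, validation loss) points -- illustrative
# values only, assuming an underlying power law L(D) = a * D^-b.
hours = np.array([1e3, 3e3, 1e4, 3e4, 1e5, 2.7e5])
loss = 2.0 * hours ** -0.15

# Fit log L = log a - b * log D with least squares.
coeffs = np.polyfit(np.log(hours), np.log(loss), 1)
b_hat, log_a_hat = -coeffs[0], coeffs[1]
print(f"fitted exponent b ~ {b_hat:.3f}, a ~ {np.exp(log_a_hat):.3f}")

# Extrapolate: predicted loss if the dataset doubles to 540k hours.
pred = np.exp(log_a_hat) * (5.4e5) ** -b_hat
print(f"predicted loss at 540k hours ~ {pred:.4f}")
```

The exponent `b` is what a scaling-law paper reports; "strong scaling" simply means the fitted line stays straight in log-log space as data grows.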
BigBang-Proton: An autoregressive foundation model unifying language, science, and the material world
36Kr · 2025-11-06 10:58
Core Insights
- The article discusses the advancements made by the company 超越对称 (Super Symmetry) with its new model BigBang-Proton, which integrates multiple scientific disciplines and challenges existing AGI approaches [1][2][4].

Group 1: BigBang-Proton Model Innovations
- BigBang-Proton unifies scientific problems across scales, from micro-particles to macro-earth systems, under a single next-word-prediction paradigm [1].
- The model introduces three fundamental innovations: Binary Patch Encoding, a theory-experiment learning paradigm, and Monte Carlo Attention, which together enhance its ability to handle complex scientific tasks [9][12][16].
- Its pre-training is designed to extend to the entire universe, via a concept called "Universe Compression" that consolidates vast amounts of information into a single foundation [5].

Group 2: Performance and Comparisons
- BigBang-Proton demonstrates superior performance in arithmetic, achieving 100% accuracy on 50-digit addition and significantly outperforming models such as DeepSeek-R1 and OpenAI o1 [31][36].
- In particle jet classification, BigBang-Proton achieved an accuracy of 51.29%, competing closely with specialized models, while mainstream LLMs performed poorly [42][44].
- The model also excels at predicting water quality and genomic sequences, achieving competitive results against state-of-the-art models in these domains [59][62].

Group 3: Theoretical and Practical Implications
- Binary Patch Encoding addresses the limitations of traditional tokenizers, allowing better numerical analysis and integration of scientific data [11][13].
- The theory-experiment learning paradigm bridges the gap between theoretical knowledge and experimental data, enhancing the model's applicability in real-world scientific research [12][15].
- These advances could significantly impact fields reliant on numerical calculation, such as science, engineering, and finance, by resolving long-standing issues with arithmetic logic [37].
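The summary credits Binary Patch Encoding with fixing the digit-merging problem that makes long arithmetic hard for BPE-tokenized LLMs. Below is a toy sketch of the underlying idea only (raw bytes grouped into fixed-size patches, so every digit keeps its own symbol); it does not reproduce BigBang-Proton's actual encoding scheme, and the helper name is invented for illustration:

```python
def to_binary_patches(text: str, patch_size: int = 4) -> list[list[int]]:
    """Encode text as raw UTF-8 bytes grouped into fixed-size patches.

    Illustrative only: each digit maps to its own byte, so a 50-digit
    number is never merged into opaque multi-digit tokens the way a
    BPE tokenizer would merge it.
    """
    data = text.encode("utf-8")
    # Pad to a whole number of patches with zero bytes.
    pad = (-len(data)) % patch_size
    data += b"\x00" * pad
    return [list(data[i:i + patch_size]) for i in range(0, len(data), patch_size)]

patches = to_binary_patches("314159", patch_size=4)
print(patches)  # [[51, 49, 52, 49], [53, 57, 0, 0]] -- ASCII codes of the digits
```

Because the byte stream is lossless and uniform, digit-level alignment (the prerequisite for exact long addition) is preserved by construction.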
Embodied AI steps into the Scaling Law era! A 10B+ foundation model trained on 270,000 hours of real-world data
机器之心· 2025-11-05 06:30
Core Viewpoint
- The article discusses the breakthrough achieved by the AI robotics startup Generalist with its new embodied foundation model, GEN-0, which is designed for multimodal training on high-fidelity physical-interaction data and aims to enhance robotic intelligence through scalable data and compute [2][5].

Group 1: GEN-0 Model Features
- GEN-0 is built to capture human-level reflexes and physical common sense, with a parameter count exceeding 10 billion [3][4].
- A core feature of GEN-0 is "Harmonic Reasoning," which allows the model to think and act simultaneously and seamlessly, a property crucial for real-world physical systems [5].
- The model demonstrates strong scaling laws: increased pre-training data and computational power predictably enhance performance across a variety of tasks [6][10].

Group 2: Data and Training Insights
- Generalist has pre-trained GEN-0 on over 270,000 hours of diverse real-world manipulation data, with the dataset growing at a rate of 10,000 hours per week [23][24].
- The company emphasizes that data quality and diversity matter more than sheer quantity; different data mixes yield models with different characteristics [33].
- Scaling experiments revealed that smaller models exhibit "ossification" while larger models continue to improve, highlighting the importance of model size for absorbing complex sensorimotor data [10][11].

Group 3: Applications and Future Directions
- GEN-0 has been successfully tested on various robotic platforms, including humanoid robots with differing degrees of freedom [6].
- The company is building the largest and most diverse real-world manipulation dataset to expand GEN-0's capabilities, covering a wide range of tasks across different environments [28].
- Generalist aims to build robust infrastructure to support the extensive data collection and processing required to train large-scale robotic models [31].
Another path for visual generation: principles and practice of the Infinity autoregressive architecture
AI前线· 2025-10-31 05:42
Core Insights
- The article discusses significant advances in visual autoregressive models, highlighting their potential for AI-generated content (AIGC) and their competitive edge against diffusion models [2][4][11].

Group 1: Visual Autoregressive Models
- Visual autoregressive models (VAR) use a "coarse-to-fine" approach, starting from low-resolution images and progressively refining them into high-resolution outputs, which aligns more closely with human visual perception [12][18].
- The VAR architecture includes an improved VQ-VAE with a hierarchical structure, enabling efficient encoding and reconstruction of images while minimizing token usage [15][30].
- VAR has demonstrated image-generation quality superior to existing models such as DiT, with a robust scaling curve showing performance improvements as model size and compute increase [18][49].

Group 2: Comparison with Diffusion Models
- Diffusion models add Gaussian noise to images and train a network to reverse the process, maintaining the original resolution throughout [21][25].
- VAR's key advantages over diffusion models are higher training parallelism and a process that more intuitively mimics human visual cognition, though diffusion models can correct errors through iterative refinement [27][29].
- VAR's approach allows faster inference, with the Infinity model achieving significant speedups over comparable diffusion models [46][49].

Group 3: Innovations in Tokenization and Error Correction
- The Infinity framework introduces a novel "bitwise tokenizer" that improves reconstruction quality while allowing a much larger vocabulary, improving detail and instruction adherence in generated images [31][41].
- A self-correction mechanism is integrated into training, enabling the model to learn from previous errors and significantly reducing cumulative error during inference [35][40].
- The findings indicate that larger models benefit from larger vocabularies, reinforcing the reliability of scaling laws for model performance [41][49].
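The "bitwise tokenizer" point above turns on replacing a lookup codebook with per-dimension bits, which is how the vocabulary can grow exponentially with latent width. A minimal sign-quantization sketch of that idea; it omits the normalization and training details of Infinity's real tokenizer and is illustrative only:

```python
import numpy as np

def bitwise_quantize(z: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Quantize each latent dimension to a bit by sign.

    A d-dimensional latent becomes d independent bits, giving an
    implicit vocabulary of 2^d codes without storing a 2^d-entry
    codebook. Sketch of the idea only, not the production tokenizer.
    """
    bits = (z >= 0).astype(np.int8)          # d bits per latent vector
    z_hat = np.where(bits == 1, 1.0, -1.0)   # dequantized +/-1 representation
    return bits, z_hat

z = np.array([0.7, -0.2, 0.05, -1.3])
bits, z_hat = bitwise_quantize(z)
print(bits)   # [1 0 1 0]
print(z_hat)  # [ 1. -1.  1. -1.]
```

With, say, d = 32 bits the implicit vocabulary is 2^32 codes, which is why the article can speak of "larger vocabularies" without a correspondingly large embedding table.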
SemiAnalysis founder on the trillion-dollar AI race: compute is the currency of the AI world, and Nvidia is its "central bank"
海外独角兽· 2025-10-22 12:04
Core Insights
- The article discusses the intertwining of computing power, capital, and energy in the new AI-driven global infrastructure, arguing that AI is not just an algorithmic revolution but an industrial migration shaped by compute, funding, and geopolitics [2].
- It highlights the emergence of a "Triangle Deal" among OpenAI, Oracle, and Nvidia, in which OpenAI purchases cloud services from Oracle, which in turn buys GPUs from Nvidia, creating a closed loop of capital flow [4][5].
- The article also argues that controlling data, interfaces, and switching costs is crucial for gaining market power in the AI industry [9].

AI Power Struggle
- The "Triangle Deal" involves OpenAI purchasing $300 billion of cloud services from Oracle over five years, with Nvidia benefiting significantly from GPU sales [4].
- Nvidia's investment of up to $100 billion in OpenAI for building AI data centers illustrates the scale of capital required for AI infrastructure [5].
- Competition in the AI industry is fundamentally about who controls the data and the interfaces, as seen in the dynamics between OpenAI and Microsoft [9].

Neo Clouds and Business Models
- Neo Clouds represent a new business layer in the AI industry, providing compute leasing and model-hosting services [10].
- Neo Clouds follow two models: short-term contracts with high margins but high price risk, and long-term contracts that ensure stable cash flow but depend heavily on counterparty credit [11].
- Inference providers are emerging as key players, offering model hosting and efficient inference services, but they face high uncertainty because their clients are mostly smaller companies [12][13].

AI Arms Race
- The article discusses the strategic importance of AI in global power dynamics, particularly for the U.S. in maintaining its global dominance [14].
- China, in contrast, is pursuing a long-term strategy of building a self-sufficient semiconductor and AI supply chain, backed by significant government investment [15].

Scaling Laws and Technical Challenges
- Dylan Patel argues that scaling laws will not exhibit diminishing returns, suggesting that adding computational resources will continue to improve model performance [16].
- Balancing model size against usability is a critical challenge, since larger models raise inference costs and can degrade user experience [17].
- Efficient reasoning and memory systems are emphasized, with a focus on extending reasoning time to improve performance [22].

AI Factory Concept
- The AI Factory concept frames AI as industrial production, where tokens are the product of computational power and efficiency [28][30].
- Companies must optimize token production under constraints of power consumption and model efficiency to remain competitive [30].

Talent and Energy Dynamics
- The scarcity of engineers who can use GPUs effectively is highlighted as a major bottleneck in the AI industry [31].
- AI data-center energy consumption is growing, with projections of approximately 624-833 billion kWh by 2025 [32][35].
- The U.S. faces challenges in expanding power-generation capacity to meet the rising energy demands of AI infrastructure [36][37].

Software Industry Transformation
- The traditional SaaS business model is under threat as AI lowers software-development costs, pushing companies toward in-house development [38][39].
- Companies with established ecosystems, like Google, may retain advantages in the evolving landscape, while pure software firms face mounting challenges [40].

Company Evaluations
- OpenAI is regarded as a top-tier company, while Anthropic is viewed favorably for its focused approach and rapid revenue growth [41].
- Nvidia is seen as the dominant player in semiconductors, with significant influence over the AI infrastructure landscape [25].
- Meta is highlighted for its potential to transform human-computer interaction through integrated hardware and software [42].
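The "AI Factory" framing above treats tokens as industrial output produced under power and efficiency constraints. A back-of-envelope sketch of that arithmetic; every number below is an assumed placeholder for illustration, not a figure from the article:

```python
# Back-of-envelope "AI factory" economics: tokens produced per kWh.
# All numbers are illustrative assumptions.
gpu_power_kw = 1.0             # assumed average draw per GPU incl. overhead (kW)
tokens_per_sec_per_gpu = 3000  # assumed inference throughput
electricity_usd_per_kwh = 0.08

tokens_per_kwh = tokens_per_sec_per_gpu * 3600 / gpu_power_kw
energy_cost_per_million_tokens = 1e6 / tokens_per_kwh * electricity_usd_per_kwh

print(f"tokens per kWh: {tokens_per_kwh:,.0f}")
print(f"energy cost per 1M tokens: ${energy_cost_per_million_tokens:.4f}")
```

The point of the framing: throughput per watt, not raw GPU count, is the unit economics that "factory" operators compete on.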
"First-Principles Thinking on Large Models": transcript of Li Jianzhong's dialogue with Lukasz Kaiser, co-creator of GPT-5 and the Transformer
36Kr · 2025-10-13 10:46
Core Insights
- The rapid development of large intelligent systems is reshaping industry dynamics, exemplified by OpenAI's recent release of Sora 2, which showcases advances in model capabilities and the complexity of AI evolution [1][2].
- The dialogue between industry leaders, CSDN's Li Jianzhong and OpenAI's Lukasz Kaiser, focuses on first-principles thinking about large models and their implications for future AI development [2][5].

Group 1: Language and Intelligence
- Language plays a crucial role in AI; some experts argue that relying solely on language models for AGI is misguided, since language is a low-bandwidth representation of the physical world [6][9].
- Kaiser emphasizes the temporal dimension of language, suggesting that the ability to generate sequences over time is vital for expressing intelligence [7][9].
- While language models can form abstract concepts, these may not fully align with human concepts, particularly those grounded in physical experience [11][12].

Group 2: Multimodal Models and World Understanding
- The industry trend is toward unified models that handle multiple modalities; current models like GPT-4 already demonstrate significant multimodal capabilities [12][13].
- Kaiser acknowledges that while modern language models can process multimodal tasks, integrating different modalities remains a challenge [13][15].
- The discussion raises skepticism about whether AI can fully understand the physical world through observation alone, while suggesting that language models may serve as effective world models in certain contexts [14][15].

Group 3: AI Programming and Future Perspectives
- AI programming is emerging as a key application of large language models, with two main views on its future: one advocating natural language as the primary programming interface, the other emphasizing the continued need for traditional programming languages [17][18].
- Kaiser believes language models will cover an increasing share of programming tasks, but a solid grasp of programming concepts will remain essential for professional developers [19][20].

Group 4: Agent Models and Generalization Challenges
- Training "agent models" faces challenges in generalizing to new tasks, raising the question of whether this stems from training methods or inherent limitations [21][22].
- Kaiser suggests that the effectiveness of agent systems depends on learning from interactions with diverse tools and environments, which is currently limited [22][23].

Group 5: Scaling Laws and Computational Limits
- Treating scaling laws as the sole key to stronger AI raises concerns about over-reliance on compute at the expense of algorithmic and architectural advances [24][25].
- Kaiser distinguishes pre-training scaling laws from reinforcement-learning scaling laws, noting that while pre-training has been effective, it may be approaching economic limits [25][26].

Group 6: Embodied Intelligence and Data Efficiency
- Slow progress in embodied intelligence, particularly humanoid robots, is attributed either to data scarcity or to fundamental differences between bits and atoms [29][30].
- Kaiser argues that advances in data efficiency and multimodal models will be crucial for effective embodied intelligence [30][31].

Group 7: Reinforcement Learning and Scientific Discovery
- The shift toward reinforcement-learning-driven reasoning models presents both opportunities for innovation and questions about their effectiveness in generating new scientific insights [32][33].
- Kaiser notes that while reinforcement learning offers high data efficiency, it has limitations compared with traditional gradient-descent training [33][34].

Group 8: Organizational Collaboration and Future Models
- Large-scale collaboration among agents remains a significant challenge, requiring more parallel processing and effective feedback mechanisms in training [35][36].
- Kaiser emphasizes the need for next-generation reasoning models that operate in a more parallel, efficient manner to enable organizational collaboration [36][37].

Group 9: Memory Mechanisms in AI
- Current models' memory is limited by context windows, resembling working memory rather than true long-term memory [37][38].
- Kaiser suggests that future architectures may need more sophisticated memory mechanisms to achieve genuine long-term memory [38][39].

Group 10: Continuous Learning in AI
- The potential for AI models to support continuous learning is being explored, with current models using context as a form of ongoing memory [39][40].
- Kaiser believes that while in-context learning is a step forward, more elegant solutions for continuous learning will be needed [40][41].
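Groups 9 and 10 above describe today's model "memory" as a bounded context window, closer to working memory than long-term storage. A toy sketch of that behavior: a rolling buffer that evicts the oldest messages once a token budget is exceeded. The class name and word-count "tokens" are simplifying assumptions, not any production API:

```python
from collections import deque

class RollingContext:
    """Toy working-memory buffer: keeps only the most recent messages
    that fit in a fixed token budget, like a model context window.
    Token counting here is whitespace word count -- a simplification."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.messages = deque()

    def add(self, message: str) -> None:
        self.messages.append(message)
        # Evict oldest messages until we are back under budget.
        while self._total_tokens() > self.max_tokens and len(self.messages) > 1:
            self.messages.popleft()

    def _total_tokens(self) -> int:
        return sum(len(m.split()) for m in self.messages)

ctx = RollingContext(max_tokens=6)
ctx.add("the user asked about scaling laws")   # 6 tokens, fits
ctx.add("the model replied with a summary")    # 6 more -> oldest evicted
print(list(ctx.messages))  # ['the model replied with a summary']
```

Everything evicted is simply gone, which is exactly the "working memory, not long-term memory" limitation the dialogue points at.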
"Reasoning models are still at the RNN stage": Li Jianzhong in conversation with Lukasz Kaiser, co-creator of GPT-5 and the Transformer
AI科技大本营· 2025-10-10 09:52
Core Insights
- The dialogue traces the evolution of AI from language models to reasoning models, arguing that reasoning models still await a breakthrough on the order of the Transformer architecture [1][2][4].

Group 1: Language and Intelligence
- Language plays a crucial role in AI development, with the emergence of large language models marking a significant leap in AI intelligence [6][8].
- Understanding language as a time-dependent sequence is essential for expressing intelligence, as it allows continuous generation and processing of information [7][9].
- Current models exhibit the ability to form abstract concepts, similar to human learning, despite criticisms that they lack true understanding [9][10].

Group 2: Multimodal and World Models
- The pursuit of unified models across modalities is ongoing, with current models like GPT-4 already demonstrating multimodal capabilities [12][13].
- Some experts are skeptical that language models alone suffice for AGI, advocating instead for world models that learn physical-world rules through observation [14][15].
- Improvements in model architecture and data quality are needed to bridge the gap between language models and world models [15][16].

Group 3: AI Programming
- AI programming is seen as a major application of language models, with a potential shift toward natural-language-based programming [17][19].
- Two main views on the future of AI programming exist: AI-native programming versus AI as a copilot, suggesting a hybrid approach [18][20].

Group 4: Agent Models and Generalization
- Agent models face significant challenges in generalizing to new tasks [21][22].
- The effectiveness of agent systems depends on learning from interactions and using external tools, which is currently limited [22][23].

Group 5: Scaling Laws and Computational Limits
- Scaling laws remain debated, with concern that over-reliance on compute may overshadow algorithmic advances [24][25].
- The economic limits of scaling are acknowledged, suggesting the need for new architectures beyond current paradigms [25][28].

Group 6: Embodied Intelligence
- Slow progress in embodied intelligence, particularly robotics, is attributed to data scarcity and fundamental differences between bits and atoms [29][30].
- Future models that understand and act in the physical world are anticipated, requiring advances in multimodal training [30][31].

Group 7: Reinforcement Learning
- The shift toward reinforcement-learning-driven reasoning models is highlighted, with potential for significant scientific discoveries [32][33].
- Current RL training methods remain limited, and further exploration and improvement are needed [34].

Group 8: AI Organization and Collaboration
- Next-generation reasoning models are seen as essential for large-scale agent collaboration [35][36].
- More parallel processing and effective feedback mechanisms are needed in agent systems to enhance collaboration [36][37].

Group 9: Memory and Learning
- Current models' memory capabilities are limited, and more sophisticated memory mechanisms are needed [37][38].
- Continuous learning is a critical area for future development, with ongoing efforts to integrate memory tools into models [39][40].

Group 10: Future Directions
- Next-generation reasoning models may achieve higher data efficiency and generate genuinely novel insights [41].
OpenAI's Altman admits it: I'm just not built to run a company
量子位· 2025-10-09 07:03
Core Insights
- OpenAI is pursuing three main goals: becoming a personal AI subscription service, building large-scale infrastructure, and achieving a genuinely useful AGI (Artificial General Intelligence) [2][4][29].
- The recent launch of Sora 2 and a string of investment collaborations, including partnerships with AMD and Nvidia, signal a strategic shift toward aggressive infrastructure investment [1][29].

Group 1: OpenAI's Strategic Goals
- OpenAI aims to become a personal AI subscription service, which requires building vast infrastructure to support that vision [4][29].
- The ultimate mission is to create AGI that is genuinely beneficial to humanity, requiring a multifaceted approach beyond traditional business models [4][8].
- OpenAI's infrastructure is currently intended for internal use; possibilities for external applications remain uncertain [5][29].

Group 2: Sora's Role in AGI Development
- Despite skepticism about Sora's relevance to AGI, OpenAI's CEO believes that building a "truly outstanding world model" through Sora will be crucial for AGI [10][11].
- The resources allocated to Sora are relatively small compared with OpenAI's overall compute, reflecting a balance between product innovation and research [13][29].
- Sora is seen as a way to acclimate society to coming technological advances, since video models resonate more emotionally than text [16][29].

Group 3: Future Interactions and AI Capabilities
- OpenAI envisions interaction interfaces beyond basic chat, incorporating real-time video rendering and context-aware hardware [19][21].
- The Turing Test benchmark is evolving: the new bar is AI's ability to conduct scientific research, which OpenAI anticipates within two years [21][22].
- Confidence in its research roadmap and the economic value it can generate underpins OpenAI's commitment to aggressive infrastructure investment [29][31].

Group 4: Leadership and Management Philosophy
- OpenAI's CEO acknowledges a preference for the investor role over management, citing difficulty with organizational dynamics and operational detail [41][42].
- The transition from investor to CEO has been both challenging and rewarding, providing insight into groundbreaking AI work [41][43].
- The future of AI development is closely tied to energy availability, with a call for more efficient energy solutions to support AI advances [44].
Word is everyone's going all-in on post-training? Here's the definitive guide
机器之心· 2025-10-09 02:24
Core Insights
- The article emphasizes the shift in focus from pre-training to post-training in large language models (LLMs), highlighting the diminishing returns of scaling laws as model sizes reach hundreds of billions of parameters [2][3][11].

Group 1: Importance of Post-Training
- Post-training is recognized as a crucial phase for enhancing the reasoning capabilities of models such as OpenAI's o-series, DeepSeek R1, and Google Gemini, marking it as a necessary step toward advanced intelligence [3][11].
- The article surveys innovative post-training methods such as Reinforcement Learning from Human Feedback (RLHF), Reinforcement Learning from AI Feedback (RLAIF), and Reinforcement Learning with Verifiable Rewards (RLVR) [2][3][12].

Group 2: Transition from Pre-Training to Post-Training
- Foundation models are pre-trained on large datasets to predict the next token, but often lack practical utility in real-world applications, motivating instruction fine-tuning [7][8].
- Post-training aims to align model behavior with user expectations, prioritizing quality over quantity: its datasets are typically smaller but far more refined than pre-training corpora [11][24].

Group 3: Supervised Fine-Tuning (SFT)
- Supervised Fine-Tuning (SFT) transforms a pre-trained model into one that follows user instructions effectively, relying on high-quality instruction-answer pairs [21][24].
- SFT dataset quality is critical: even a small number of low-quality samples can measurably hurt model performance [25][26].

Group 4: Reinforcement Learning Techniques
- Reinforcement learning (RL) is a complex yet effective method for model fine-tuning, with reward mechanisms such as RLHF, RLAIF, and RLVR employed to enhance performance [39][41].
- Reward models are central to RLHF; they are trained on human preference data to guide model outputs [44][46].

Group 5: Evaluation of Post-Training Models
- Evaluating post-trained models is multifaceted, requiring a combination of automated and human assessment to capture different aspects of quality [57][58].
- Automated evaluations are cheap and fast, while human evaluations provide a more subjective quality measure, especially for nuanced tasks [59][60].
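The reward models mentioned under Group 4 are standardly trained with a Bradley-Terry pairwise loss: maximize the margin between the reward of the preferred and rejected response. A minimal numpy sketch; the reward scores are illustrative placeholders (real reward models derive them from a transformer head over the full response):

```python
import numpy as np

def bradley_terry_loss(r_chosen: np.ndarray, r_rejected: np.ndarray) -> float:
    """Pairwise reward-model loss used in RLHF-style training:
    -log sigmoid(r_chosen - r_rejected), averaged over pairs."""
    margin = r_chosen - r_rejected
    # log1p(exp(-m)) == -log(sigmoid(m)), computed stably for positive m
    return float(np.mean(np.log1p(np.exp(-margin))))

# Reward scores for three preference pairs (illustrative values only).
r_chosen = np.array([2.0, 1.5, 0.3])
r_rejected = np.array([0.5, 1.0, 0.8])
print(f"loss = {bradley_terry_loss(r_chosen, r_rejected):.4f}")
```

Note the third pair has a negative margin (the model currently prefers the rejected answer), so it contributes most of the loss; minimizing it pushes reward scores toward the human preference ordering.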
"Bigger is better," but engineering-minded Alibaba Cloud isn't chasing headlines
Guan Cha Zhe Wang· 2025-09-29 09:46
Core Viewpoint
- Alibaba's CEO, Wu Yongming, delivered a notable presentation at the Yunqi Conference, after which Alibaba's stock price rose a significant 9.16%, indicating strong investor sentiment despite a generally cautious market [1][3].

Group 1: Company Developments
- Wu Yongming argued that large models will dominate software as the next-generation operating system, and Alibaba Cloud plans to invest in AI infrastructure beyond its existing 380 billion yuan three-year commitment [3].
- Alibaba Cloud's Qwen3-Max model has achieved significant advances, including doubling its pre-training data from 18 trillion to 36 trillion tokens, with a focus on scaling laws to enhance model performance [6][10].
- The company has positioned itself as the leader of the AI cloud market, with a reported 35.8% market share, well ahead of competitors [16][22].

Group 2: Competitive Landscape
- Competition in the AI cloud sector is intensifying, particularly from ByteDance's Volcano Engine, which has captured a 49.2% share of the model-as-a-service (MaaS) market [16][18].
- Despite the competitive pressure, Alibaba Cloud has maintained a strong position, with over 53% of Fortune 500 companies using its services for generative AI [16][22].
- Market dynamics are shifting toward self-deployment of models on Alibaba Cloud rather than pure API calls, a trend that may not be fully reflected in market-share statistics [16][22].

Group 3: Technological Innovations
- Alibaba Cloud has made significant strides in AI infrastructure, including a new AI chip approaching NVIDIA's capabilities and a high-performance network architecture supporting large-scale GPU interconnect [25][27].
- The company is building a full-stack AI infrastructure, positioning it well for growing domestic demand for AI capabilities [27].
- Innovations in model architecture, such as the Qwen3-Next model with a sparse MoE architecture, demonstrate Alibaba's commitment to advancing AI technology [6][10].
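The sparse MoE architecture mentioned for Qwen3-Next turns on top-k routing: each token activates only a few experts, so compute per token stays small while total parameters grow. A generic toy router sketch of that mechanism, not Alibaba's actual design:

```python
import numpy as np

def topk_moe_route(logits: np.ndarray, k: int = 2):
    """Toy top-k router for a sparse Mixture-of-Experts layer: pick the
    k experts with the largest gate logits and renormalize their weights
    with a softmax over just those k. Generic sketch of sparse routing."""
    topk_idx = np.argsort(logits)[-k:][::-1]           # indices of k largest gates
    gates = np.exp(logits[topk_idx] - logits[topk_idx].max())
    gates /= gates.sum()                               # softmax over selected experts
    return topk_idx, gates

router_logits = np.array([0.1, 2.0, -1.0, 1.0])        # one token, four experts
idx, w = topk_moe_route(router_logits, k=2)
print(idx)  # [1 3]
print(w)    # softmax over logits 2.0 and 1.0 -> ~[0.731, 0.269]
```

With, say, 4 of 128 experts active per token, the layer holds 128 experts' worth of parameters but pays roughly 4 experts' worth of FLOPs, which is the efficiency argument for sparse MoE.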