Scaling Law
A CAS PhD and autonomous-driving leader, Yu Yinan bets on consumer embodied intelligence, aiming to bring robot dogs into thousands of homes within a year!
Hundun Academy · 2026-02-03 11:57
He was a pioneer of China's AI 1.0 era: in the dark before the dawn of deep learning, he became the first person in China to reproduce AlexNet (a deep convolutional neural network) and to pick up AI's "golden hammer." He was a founding team member and former president of intelligent driving at Horizon Robotics, yet at the very moment the company successfully went public in 2024, he turned to the uncharted territory of embodied intelligence to test the Scaling Law of the physical world. He is the founder of Vbot (维他动力): with a 60-person team he brought a robot dog to mass production in one year, and its first intelligent quadruped robot was a hit at launch, with pre-orders reaching 6,540 units as of January 10 and a sales breakthrough in which the consumer (C-end) side drove the business (B-end) side. He is Dr. Yu Yinan, founder and CEO of Vbot, and he aims to build a world-leading consumer robotics product. In this episode of the "善友探索流" podcast, we invited Dr. Yu Yinan, founder and CEO of Vbot and a sixth-cohort student of Hundun Academy. In this in-depth conversation, Dr. Yu offered three disruptive business insights: On the philosophy of survival: hard-tech entrepreneurship is not gambling but precise judgment grounded in cognition; only by "locking in the floor" can one dare to bet on an "unlimited ceiling." On the nature of a company: the difference between an AI company and a tech company lies not in technology but in whether it pursues the unlimited growth promised by the Scaling Law. On the endgame of organizations: he used a ...
Galbot (银河通用) and Magic Atom (魔法原子) join in! Robot companies are set to flock to the Spring Festival Gala
Nan Fang Du Shi Bao· 2026-01-28 13:30
With the Year of the Horse approaching, the "tech content" of the closely watched CCTV Spring Festival Gala has reached a new high. Following Magic Atom, a robotics company from the Dreame camp, Galbot (银河通用), a company founded less than three years ago, may also "show its face" at the Gala: on January 25 it was officially announced as the 2026 CCTV Spring Festival Gala's "designated embodied large-model robot." A reporter from Southern Metropolis Daily's Bay Finance desk asked Galbot about the specific form of interaction planned for the Gala; no response had been received as of press time. Besides Galbot, Magic Atom, the other company officially announced as a "strategic partner," also has a notable pedigree. Magic Atom was founded in early 2024 with "Dreame" DNA; its core founding team previously worked on Xiaomi's robotics team. It currently has multiple product lines, including general-purpose humanoid robots and quadruped robots. According to public information, the company is advancing on an IPO timetable, aiming to accelerate its moves in the secondary market. New faces at the Spring Festival Gala: compared with Unitree, the "familiar face" that has appeared at the Gala before, the companies making high-profile appearances this time are drawing close attention within the industry. Public records show that Galbot was founded in May 2023, with a core team of deep "Peking University" academic pedigree. Although less than three years old, it has completed multiple funding rounds. The reporter found that the company has completed four rounds of financing in total, with a star-studded investor lineup including Meituan, SenseTime, CATL (300750), CICC Capital, and the CCTV Media Convergence Fund. Notably, after a round of over US$300 million at the end of last year, Galbot ...
Can the Transformer support the next generation of Agents?
Tai Mei Ti APP· 2025-12-22 07:39
Core Insights
- The current Transformer architecture is deemed insufficient for supporting the next generation of AI agents, as highlighted by experts at the Tencent ConTech conference [1][2][11]
- There is a growing consensus that the AI industry is transitioning from a "scaling era" focused on data and computational power to a "research era" that emphasizes foundational innovation [11][12]

Group 1: Limitations of Current AI Models
- Experts, including prominent figures like Fei-Fei Li and Ilya Sutskever, express concerns that existing Transformer models are reaching their limits, particularly in understanding causality and physical reasoning [2][5][11]
- The marginal returns of scaling laws are diminishing, indicating that simply increasing model size and data may not yield further advancements in AI capabilities [2][10]
- Current models are criticized for their reliance on statistical correlations rather than true understanding, likening them to students who excel in exams through memorization rather than comprehension [4][5]

Group 2: Challenges in Long Context Processing
- The ability of Transformers to handle long contexts is questioned, with evidence suggesting that performance degrades significantly beyond a certain token limit [6][7]
- The architecture's unidirectional information flow restricts its capacity for deep reasoning, which is essential for effective decision-making [6][7]

Group 3: Need for New Architectures
- The industry is urged to explore new architectural breakthroughs that integrate causal logic and physical understanding, moving beyond the limitations of current models [11][12]
- Proposed alternatives include nonlinear RNNs that allow for internal feedback and reasoning, which could enhance AI's ability to learn and adapt [12][13]

Group 4: Implications for the AI Industry
- A shift away from Transformer-based models could lead to a reevaluation of hardware infrastructure, as current systems are optimized for these architectures [13]
- The value of data types may also change, with physical-world sensor data and interactive data becoming increasingly important in the new AI landscape [14]
- Companies in the tech sector face both challenges and opportunities as they navigate this transition towards more advanced AI frameworks [16]
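The long-context complaint in Group 2 has a concrete arithmetic core: self-attention scores every pair of tokens, so its score matrix grows quadratically with context length, while the recurrent alternatives mentioned in Group 3 carry a fixed-size state. A minimal sketch of that asymmetry (our own illustration, not from the conference; function names are hypothetical):

```python
def attention_score_entries(seq_len: int) -> int:
    """Entries in one head's self-attention score matrix: one score per
    token pair, so scoring cost and memory grow as seq_len ** 2."""
    return seq_len * seq_len

def recurrent_state_entries(hidden_size: int) -> int:
    """A recurrent model (including the nonlinear RNNs proposed as
    alternatives) carries a fixed-size state regardless of context length."""
    return hidden_size

# Doubling the context quadruples the attention score matrix ...
assert attention_score_entries(2048) == 4 * attention_score_entries(1024)
# ... while the recurrent state does not depend on context length at all.
assert recurrent_state_entries(hidden_size=4096) == 4096
```

The quadratic term is an architectural property rather than a tuning issue, which is why degradation "beyond a certain token limit" pairs naturally with interest in recurrent designs.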
Huawei's "black tech" clears a key bottleneck in AI deployment
Guan Cha Zhe Wang· 2025-08-15 04:06
Core Viewpoint
- The traditional Scaling Law for AI models is facing significant bottlenecks, particularly in China, where infrastructure investment is lagging behind the US, leading to challenges in AI inference performance and commercial viability [1][4][9]

Group 1: AI Inference Challenges
- AI inference has become a critical area, with current demand for inference computing power exceeding that for training, as evidenced by GPT-5's API call volume exceeding 20 billion calls per minute [4][6]
- Chinese enterprises face a dilemma of inference that "won't run," "runs slowly," and "runs expensively," with domestic models outputting fewer than 60 tokens per second versus over 200 tokens per second for foreign models [7][9]
- The increasing complexity of AI applications, such as long-text processing and multi-turn dialogues, has intensified the demand for improved inference performance [1][4][6]

Group 2: Huawei's UCM Technology
- Huawei has introduced the Unified Cache Manager (UCM), a breakthrough technology designed to enhance AI inference performance by optimizing memory management and overcoming HBM capacity limitations [1][11]
- UCM employs a tiered caching strategy that allows for the efficient storage and retrieval of KV Cache data, significantly reducing inference latency and costs [10][11][18]
- The technology has demonstrated substantial improvements in inference speed, with a reported 125-fold increase in processing speed for specific applications in collaboration with China UnionPay [19][21]

Group 3: Industry Implications and Future Prospects
- The introduction of UCM is seen as a pivotal move for the Chinese AI industry, potentially leading to a positive cycle of user growth, increased investment, and rapid technological iteration [18][24]
- Huawei's open-source approach to UCM aims to foster collaboration within the AI ecosystem, allowing various stakeholders to integrate and enhance their frameworks [28]
- The technology is expected to be applicable across various industries, addressing the challenges posed by the increasing volume of data and the need for efficient inference solutions [23][24]
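The article describes UCM's tiered caching of KV Cache data only at a high level. The sketch below is our own toy illustration of the general idea (a small fast tier standing in for HBM, spilling evicted entries to a larger slow tier and promoting them on reuse); it is emphatically not Huawei's actual UCM code, and every class and method name here is hypothetical:

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy tiered KV cache: a capacity-limited 'hot' tier with LRU
    eviction into an unbounded 'cold' tier, so older entries stay
    retrievable instead of being recomputed. Purely illustrative."""

    def __init__(self, hot_capacity: int):
        self.hot_capacity = hot_capacity
        self.hot = OrderedDict()  # fast tier (stands in for HBM)
        self.cold = {}            # slow tier (stands in for DRAM/SSD)

    def put(self, token_id, kv):
        self.hot[token_id] = kv
        self.hot.move_to_end(token_id)  # mark as most recently used
        while len(self.hot) > self.hot_capacity:
            evicted_id, evicted_kv = self.hot.popitem(last=False)
            self.cold[evicted_id] = evicted_kv  # spill, don't drop

    def get(self, token_id):
        if token_id in self.hot:
            self.hot.move_to_end(token_id)
            return self.hot[token_id]
        if token_id in self.cold:  # promote back to the hot tier on reuse
            kv = self.cold.pop(token_id)
            self.put(token_id, kv)
            return kv
        return None  # true miss: the KV pair would have to be recomputed

cache = TieredKVCache(hot_capacity=2)
for t in range(4):
    cache.put(t, f"kv{t}")
# Tokens 0 and 1 were spilled to the cold tier, not lost:
assert cache.get(0) == "kv0"
```

The point of spilling rather than dropping is that a long conversation's older KV entries can be fetched back from cheaper storage instead of being recomputed, which is where the claimed latency and cost savings of tiered caching come from.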
Understanding DeepSeek and OpenAI in one article: why do entrepreneurs need cognitive innovation?
Hundun Academy · 2025-06-10 11:07
Core Viewpoint
- The article emphasizes the transformative impact of AI technology on business innovation and the necessity for companies to adapt their strategies to remain competitive in the evolving landscape of AI [1][2]

Group 1: OpenAI's Emergence
- OpenAI was founded in 2015 by Elon Musk and Sam Altman with the mission to counteract the monopolistic power of major tech companies in AI, aiming for an open and safe AI for all [9][10][12]
- The introduction of the Transformer architecture by Google in 2017 revolutionized language processing, enabling models to understand context better and significantly improving training speed [13][15]
- OpenAI's belief in the Scaling Law led to unprecedented investments in AI, resulting in the development of groundbreaking language models that exhibit emergent capabilities [17][19]

Group 2: ChatGPT and Human-Machine Interaction
- The launch of ChatGPT marked a significant shift in human-machine interaction, allowing users to communicate in natural language rather than through complex commands, thus lowering the barrier to AI usage [22][24]
- ChatGPT's success not only established a user base for future AI applications but also reshaped perceptions of human-AI collaboration, showcasing vast potential for future developments [25]

Group 3: DeepSeek's Strategic Approach
- DeepSeek adopted a "Limited Scaling Law" strategy, focusing on maximizing efficiency and performance with limited resources, contrasting with the resource-heavy approaches of larger AI firms [32][34]
- The company achieved high performance at low costs through innovative model architecture and training methods, emphasizing quality data selection and algorithm efficiency [36][38]
- DeepSeek's R1 model, released in January 2025, demonstrated advanced reasoning capabilities without human feedback, marking a significant advancement in AI technology [45][48]

Group 4: Organizational Innovation in AI
- DeepSeek's organizational model promotes an AI Lab paradigm that fosters emergent innovation, allowing for open collaboration and resource sharing among researchers [54][56]
- The dynamic team structure and self-organizing management style encourage creativity and rapid iteration, essential for success in the unpredictable field of AI [58][62]
- The company's approach challenges traditional hierarchical models, advocating for a culture that empowers individuals to explore and innovate freely [64][70]

Group 5: Breaking the "Thought Stamp"
- DeepSeek's achievements highlight a shift in mindset among Chinese entrepreneurs, demonstrating that original foundational research in AI is possible within China [75][78]
- The article calls for a departure from the belief that Chinese companies should only focus on application and commercialization, urging a commitment to long-term foundational research and innovation [80][82]
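The "Scaling Law" invoked throughout this digest has a standard quantitative form in the literature (for example, OpenAI's 2020 scaling-law study). None of the articles state a formula, so the power law below is contextual background rather than any author's own claim:

```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}
```

Here $L$ is the model's test loss, $N$ the number of parameters, and $N_c$, $\alpha_N$ empirically fitted constants; analogous laws are reported for dataset size and training compute. Because $\alpha_N$ is small, each further constant-factor drop in loss requires a large multiplicative increase in $N$, which is what "diminishing marginal returns of scaling" refers to.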