Stardust Intelligence
From DeepMind to Embodied Intelligence, Wang Jianan: Algorithms Must Ultimately Serve the Real World | 万有引力
AI科技大本营· 2026-01-23 10:09
The following article is from CSDN, by 万有引力. CSDN: empowering 100 million technologists. Interview | Tang Xiaoyin; Guest | Wang Jianan; Editor | Meng Yidan; Produced by | CSDN (ID: CSDNnews). Is the endpoint of the road to AGI code, or a body? For Wang Jianan, the answer points clearly to embodied intelligence. Left: Wang Jianan; right: Tang Xiaoyin. At the 2025 Global Machine Learning Technology Conference, Tang Xiaoyin, executive editor-in-chief of CSDN and New Programmer, held an in-depth conversation with Wang Jianan, vice president of Stardust Intelligence and former DeepMind researcher. From the ultimate vision of AGI to the practical bottlenecks of embodied intelligence, from the engineering logic of fast-and-slow systems to the timeline for general-purpose robots and the convictions developers should hold, she gave answers that were both sober and colored by long-termism. Wang Jianan's core points from the interview follow; an audio podcast is available, and the full video can be found at the end of the article. She completed her studies at the University of Oxford, then joined DeepMind to work on reinforcement learning and continual learning, witnessing the birth of landmark projects such as AlphaStar. She also explored unified generative frameworks when generative AI in China was still in its early stages, working at the research frontier before the AIGC boom. Whether at the peak of "pure algorithms" or at the starting point of generative models, she stood inside the wave. In 2024 she joined Stardust Intelligence, choosing to confront ...
Breaking the "data famine" deadlock in robot learning: Jinqiu portfolio company Stardust Intelligence, with Tsinghua, MIT, and others, releases the CLAP framework | Jinqiu Spotlight
锦秋集· 2026-01-21 15:36
Core Insights
- The article discusses the introduction of the Contrastive Latent Action Pretraining (CLAP) framework, which aims to address the data-scarcity issue in robot learning by leveraging abundant human behavior videos from platforms like YouTube and Douyin [4][10].

Group 1: CLAP Framework Overview
- The CLAP framework aligns the motion space extracted from videos with the action space of robots, effectively avoiding the "visual entanglement" problem commonly faced by existing latent action models [9][11].
- It utilizes a unified Visual-Language-Action (VLA) framework that combines the precision of machine data with the semantic diversity of large-scale unannotated human video demonstrations [14].

Group 2: Training Methodology
- The research team developed two VLA modeling paradigms: CLAP-NTP, an autoregressive model excelling at instruction following and object generalization, and CLAP-RF, a Rectified Flow-based policy aimed at high-frequency, fine-grained control [10][16].
- A knowledge matching (KM) regularization strategy mitigates catastrophic forgetting during fine-tuning, ensuring that robots retain previously learned skills while acquiring new ones [11][16].

Group 3: Experimental Results
- Extensive experiments demonstrate that CLAP significantly outperforms strong baseline methods, enabling effective skill transfer from human videos to robot execution [18].
- In real-world pick-and-place tasks, CLAP-NTP and CLAP-RF achieve success rates of 90% and 85% respectively, indicating superior capabilities [20].
- Robustness evaluations show that CLAP-RF maintains a mean success rate of 66.7% under environmental perturbations [21].
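The contrastive alignment described above can be illustrated with a minimal sketch: a symmetric InfoNCE-style objective that pulls matched (video-motion, robot-action) embedding pairs together and pushes mismatched pairs apart. This is a generic contrastive loss under assumed shapes and names, not the paper's actual architecture or loss.

```python
import numpy as np

def info_nce_alignment_loss(video_emb, action_emb, temperature=0.07):
    """Symmetric InfoNCE: matched pairs sit on the diagonal of the
    similarity matrix and are treated as the positive class."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    a = action_emb / np.linalg.norm(action_emb, axis=1, keepdims=True)
    logits = v @ a.T / temperature  # (batch, batch) cosine similarities

    def xent_diag(l):
        # cross-entropy with targets on the diagonal, numerically stable
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # average the video->action and action->video directions
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))

# toy usage: 8 paired embeddings of dimension 32
rng = np.random.default_rng(0)
loss = info_nce_alignment_loss(rng.standard_normal((8, 32)),
                               rng.standard_normal((8, 32)))
```

With unrelated random embeddings the loss is near log(batch); it falls toward zero as paired embeddings become more similar than mismatched ones.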
Stardust Intelligence x Tsinghua x MIT release the CLAP framework! Robots learn manipulation skills by watching videos
具身智能之心· 2026-01-20 00:33
Recently, Stardust Intelligence, together with Tsinghua University, HKU, and MIT, proposed Contrastive Latent Action Pretraining (CLAP), a framework based on contrastive learning. CLAP aligns the motion space distilled from videos with the robot's action space; in other words, robots can learn skills directly from video!

Paper: https://arxiv.org/abs/2601.04061

Robot learning has long faced a vexing "data famine": the internet holds hundreds of millions of human behavior videos, yet data collected specifically for training robots is scarce. The root of this asymmetry is that collecting robot manipulation data requires expensive hardware, specialized operating environments, and extensive manual annotation, making it costly and inefficient. Human behavior videos, by contrast, are abundant, but the wide semantic gap between visual representations and the robot's action space makes them hard for traditional methods to exploit. Existing latent action ...
Robots learn manipulation skills by watching videos: the newly released CLAP framework from Tsinghua and collaborators delivers
机器之心· 2026-01-19 03:51
Core Insights
- The article discusses the introduction of the Contrastive Latent Action Pretraining (CLAP) framework, developed by Tsinghua University in collaboration with Stardust Intelligence, HKU, and MIT, which enables robots to learn skills directly from videos [2][3].

Group 1: Challenges in Robot Learning
- Robot learning has long suffered from "data scarcity": an abundance of human behavior videos online, but a lack of data collected specifically for training robots [3].
- The root cause of this asymmetry is the high cost and inefficiency of collecting robot operation data, which requires expensive hardware, specialized environments, and extensive manual labeling [3].
- Traditional latent action models face the "visual entanglement" problem, where models learn irrelevant visual noise instead of actual manipulation skills [3].

Group 2: Innovations of the CLAP Framework
- CLAP addresses the technical bottleneck of aligning the motion space extracted from videos with the robot's action space, effectively avoiding visual entanglement [3].
- Using contrastive learning, CLAP maps state transitions in videos to a quantifiable, physically executable action codebook [3].
- The framework lets robots learn skills from the vast video data available on platforms like YouTube and Douyin, significantly expanding the scale of usable training data [4].

Group 3: Training Methodology
- The research team trained CLAP under two modeling paradigms: CLAP-NTP, an autoregressive model excelling at instruction following and object generalization, and CLAP-RF, a Rectified Flow-based policy aimed at high-frequency, precise control [4][10].
- A knowledge matching (KM) regularization strategy mitigates catastrophic forgetting during fine-tuning, ensuring that robots retain previously learned skills while acquiring new ones [4][10].
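The "physically executable action codebook" idea mentioned above can be sketched as standard vector quantization: each continuous video-motion latent is snapped to its nearest entry in a discrete codebook. The function name, shapes, and random codebook here are illustrative assumptions; the paper's codebook is learned, not random.

```python
import numpy as np

def quantize_to_codebook(motion_latents, codebook):
    """Nearest-neighbor lookup: map continuous video-motion latents to
    discrete tokens in a (hypothetical) learned action codebook.
    motion_latents: (batch, dim); codebook: (num_codes, dim)."""
    # squared Euclidean distance from every latent to every code
    dists = ((motion_latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    tokens = dists.argmin(axis=1)    # discrete action tokens
    return tokens, codebook[tokens]  # token ids and their embeddings

# toy usage: 5 latents of dimension 16 against a 64-entry codebook
rng = np.random.default_rng(0)
tokens, embs = quantize_to_codebook(rng.standard_normal((5, 16)),
                                    rng.standard_normal((64, 16)))
```

Discretizing motion this way is what makes the video-derived representation consumable by token-based policies such as a next-token-prediction model.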
Group 4: Practical Implications
- The long-term value of CLAP lies not only in its technical innovation but also in its potential to accelerate the industrialization of robotics by reducing the cost and time required to deploy robots in sectors such as services and manufacturing [6].
- The unified Visual-Language-Action (VLA) framework effectively integrates the precision of machine data with the semantic diversity of large-scale unannotated human video demonstrations [8].

Group 5: Experimental Results
- Extensive experiments demonstrate that CLAP significantly outperforms strong baseline methods, enabling effective skill transfer from human videos to robot execution [12].
- In real-world tasks, CLAP-NTP and CLAP-RF achieve higher success rates than baseline methods across a range of tasks, indicating the framework's robustness and effectiveness [14][15].
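The knowledge matching (KM) regularization described in the training methodology can be illustrated with a common distillation-style sketch: a KL term that keeps the fine-tuned policy's output distribution close to the frozen pretrained one. This is a generic anti-forgetting penalty under assumed names and shapes, not the paper's exact KM formulation.

```python
import numpy as np

def km_penalty(finetuned_logits, pretrained_logits, temperature=1.0):
    """Distillation-style KL term: keep the fine-tuned policy's output
    distribution close to the frozen pretrained one, so previously
    learned skills are not overwritten while new robot data is learned."""
    def softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=-1, keepdims=True)

    p = softmax(pretrained_logits / temperature)  # frozen "teacher"
    q = softmax(finetuned_logits / temperature)   # model being fine-tuned
    eps = 1e-9  # numerical floor to avoid log(0)
    return float(np.mean((p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)))

# the total fine-tuning loss would combine this with a task loss,
# e.g. task_loss + lam * km_penalty(student_logits, teacher_logits)
rng = np.random.default_rng(1)
penalty = km_penalty(rng.standard_normal((4, 10)), rng.standard_normal((4, 10)))
```

The penalty is zero when the two models agree and grows as the fine-tuned policy drifts from the pretrained one.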
Lingyi iTech plans a block-trade share reduction; transferee to be locked up for six months
Zheng Quan Shi Bao Wang· 2026-01-16 11:27
Core Viewpoint
- Lingyi iTech (领益智造) is in a critical phase of steady growth and multi-track expansion; its actual controller plans a share reduction that is not expected to directly impact the stock price.

Financial Performance
- For the first three quarters of 2025, revenue reached 37.59 billion yuan, up 19.25% year-on-year.
- Net profit attributable to shareholders was 1.94 billion yuan, up 37.66% year-on-year.
- Profitability continues to improve, with both gross margin and net margin rising steadily [1].

Business Segments

Server Solutions
- Lingyi iTech has built comprehensive cooling capabilities for AI servers, including CDUs, liquid-cooling modules, and a range of power-supply solutions.
- The recent acquisition of Limin Da strengthens the company's "cooling + power" positioning in AI servers, with products spanning liquid-cooling plates and server racks and serving major clients such as NVIDIA and Intel [2].

Robotics
- The company holds core technologies in reducers, drivers, and motion controllers, offering a wide range of processing and development services.
- Strategic partnerships with leading companies such as Tesla and UBTECH cover hardware manufacturing, market expansion, and AI model development [3].

AI Glasses
- Lingyi iTech develops core components and technologies for AI glasses and XR wearable devices, collaborating closely with device brands to supply essential parts.
- AI/AR glasses are expected to grow rapidly, positioning the company to benefit significantly as a core component supplier [3].

Foldable Screen Hardware
- The company provides one-stop solutions for foldable-screen hardware, supplying key components such as ultra-thin titanium-alloy support parts for products like Samsung's Galaxy Z Fold7 [4].
Premier Li Qiang inspects Guangdong, views a live demo of Jinqiu portfolio company Stardust Intelligence's rope-driven AI robot | Jinqiu Spotlight
锦秋集· 2026-01-15 10:28
Core Insights
- The article highlights advances in robotics and AI by Stardust Intelligence, which has created the S1, the first mass-produced rope-driven AI robot, and showcased its capabilities across various applications [4][7].

Group 1: Government Support and Industry Development
- During his visit to Guangdong, Premier Li Qiang emphasized the need to strengthen the industrial ecosystem and explore effective business models for new technologies such as robotics and drones [2].
- The S1 demonstration by Stardust Intelligence received positive feedback from Premier Li, signaling government interest in fostering innovation in the robotics sector [5].

Group 2: Technological Innovations
- The S1's unique rope-driven design mimics human tendon movement, enabling high dynamic response, dexterous operation, and safe interaction, making it suitable for complex tasks [7].
- The company has developed Lumo-1, an end-to-end visual-language-action model that lets the robot understand commands and make decisions autonomously [7].

Group 3: Market Position and Financial Backing
- Stardust Intelligence has secured several thousand robot orders across high-value scenarios, indicating strong market demand and commercial viability [8].
- The company closed a multi-hundred-million-yuan A++ financing round in November 2025, with continued investment from Jinqiu Fund, reflecting a commitment to backing innovative AI startups [8].
Interview with Stardust Intelligence's Lai Jie: how embodied intelligence "co-creates" incremental value with people
Nan Fang Du Shi Bao· 2026-01-13 09:44
2026 is the opening year of the national "15th Five-Year" plan. This new five-year plan systematically sketches China's development blueprint through 2030: the conversion of old and new growth drivers, the upgrading of development paradigms, and the reshaping of the global landscape will all find new points of convergence in this period.

Understanding the top-level design of the "15th Five-Year" plan is the starting point for anticipating future economic dynamics and market opportunities. Technological self-reliance and new quality productive forces now occupy a more critical position; how technology enters industrial systems, penetrates production chains, and builds sustainable competitive advantages in an uncertain global environment has become a central question of the new cycle.

Stardust Intelligence founder & CEO Lai Jie

At the start of the year, Nandu Bay Finance launched the special feature "Opening 2026: At the Starting Point of a New Cycle," tracking shifts in economic development. Recently, Lai Jie, founder & CEO of Stardust Intelligence, sat down with Nandu Bay Finance to share his thinking on the main battleground of embodied intelligence, the AI-oriented (DFAI) integrated software-hardware architecture, and commercialization and industrial collaboration.

I. As robots enter real-world scenarios, how do they "co-create" incremental value with people?

Nandu Bay Finance: Against the backdrop of the "15th Five-Year" plan's emphasis on technological self-reliance and new quality productive forces, what do you see as the "main battleground" for the embodied intelligence industry over the next two to three years?

Lai Jie: I believe the main battleground of the next two to three years is neither "single-point showmanship" in motion control nor the zero-sum narrative of "replacing people with robots." What really has to be solved is how robots and people coexist and co-create: letting robots enter real scenarios and collaborate with people to complete ta ...
Making coffee, selling blind boxes: humanoid robot "practitioners" look for their own stage
Nan Fang Du Shi Bao· 2025-12-29 10:29
Core Insights
- The article discusses the emergence of "embodied intelligence" companies, focusing on the operating model of Zhi Ping Fang, which has opened a robot-run coffee shop in Beijing to boost brand exposure and public engagement with robotics [1][4].

Group 1: Business Model and Operations
- Zhi Ping Fang has launched a robot coffee shop named "Zhi Mo Fang" in Beijing's Chaoyang Park, where two humanoid robots prepare coffee for customers [1][4].
- The robots complete the coffee-making process in roughly 1.5 minutes, but human staff are still required for tasks such as customer interaction and replenishing supplies [4][6].
- The company plans to open 1,000 "Zhi Mo Fang" locations across China within three years, targeting tourist spots, commercial areas, and cultural venues [4][6].

Group 2: Market Strategy and Consumer Engagement
- The initiative is a collaboration between Zhi Ping Fang and the Chaoyang District Cultural and Tourism Bureau, aiming to showcase practical applications of humanoid robots in public settings [6].
- The business model includes direct robot sales to operators or revenue-sharing partnerships with businesses [6].
- The company aims to leverage the novelty of humanoid robots to provide emotional value and enhance consumer experience, despite concerns about how long public interest can be sustained [7][8].

Group 3: Challenges and Future Outlook
- Public exposure may reveal product issues, but the company views real user feedback as an opportunity for rapid product iteration and improvement [8].
- Zhi Ping Fang recognizes the long-term nature of the embodied intelligence sector, emphasizing the need for patience and confidence in technology development [8].
AI & Robotics Pre-Market Briefing | "Robot MART" begins thousand-unit batch order deliveries; Samsung Electronics plans to launch an application processor!
Mei Ri Jing Ji Xin Wen· 2025-12-26 03:32
Core Insights
- The AI and robotics sectors are experiencing significant growth, with notable market movements and product launches indicating a shift toward commercialization and structural recovery [1][2].

Market Performance
- The Huaxia Sci-Tech AI ETF (589010) rose 0.74%, recovering early losses in a "bottom-rebound" pattern and stabilizing above the intraday average line [1].
- The Robot ETF (562500) surged 2.93% on a strong technical recovery, with constituent gains including a 20% limit-up for Haoshi Machinery and rises of over 13% for Aifute-U and Tuosida [1].
- Robot ETF turnover exceeded 1.934 billion yuan at a 7.43% turnover rate, indicating active capital flow and positive market sentiment toward the robotics sector [1].

Key Developments
- The world's first AI-driven retail service store, "Robot MART," began large-scale order deliveries on Christmas, a significant milestone demonstrating operation in open commercial environments [1].
- Samsung Electronics plans to introduce an application processor with a self-developed GPU by 2027, extending its AI ecosystem across devices from smartphones to humanoid robots [2].
- The "2026 Beijing Yizhuang Humanoid Robot Half Marathon" is scheduled for April 19, 2026, promoting advances in robot technology through competitive events [2].

Institutional Perspectives
- Huolong Securities notes that the humanoid robot industry is moving from concept validation to commercial realization, with major events and clear production timelines suggesting an approaching industry inflection point [2].
Beijing, Shanghai, Guangzhou: a batch of robots report for work on Christmas Day
36Kr· 2025-12-26 01:53
By Fu Chong | Edited by Su Jianxun

As the year draws to a close, a batch of embodied-intelligence companies have begun delivering products, giving "robots at work" new settings.

On December 25, Christmas Day, embodied-intelligence startup Stardust Intelligence told Intelligent Emergence (《智能涌现》) that it had begun batch deliveries with partners Jinma Rides (金马游乐) and Yuehua Entertainment (乐华娱乐). The delivered robots are selling trendy-toy blind boxes at Hopson One in Beijing's Chaoyang district, Oriental Pearl Plaza in Shanghai, and the Bona Cinema at 花城汇 in Guangzhou.

In this retail cart, called the "Smart Adoption Store" (智能领养店), robots independently handle the full workflow: voice reception, order-taking and payment, grabbing blind boxes, and handing over goods.

[Video: customers trying the "Smart Adoption Store" at Hopson One, Beijing Chaoyang; footage provided by the interviewee]

Reportedly, "Robot MART," the retail store launched by Stardust Intelligence and Jinma Rides, will roll out to commercial districts, amusement parks, street blocks, and parks. In November 2025, a jointly operated "Robot MART" opened at the 时光奇遇 amusement park in Zhongshan, Guangdong, selling popcorn snacks and drinks.

That Stardust Intelligence's robots can move into such varied settings owes to their technical approach.

The "rope-driven body" is Stardust Intelligence's core R&D direction. The dexterity and fine force control it brings let the robot quickly perform detailed, human-like hand operations such as grasping and scooping. Moreover, because rope-driven robots are lighter and their joints have a compliant buffering mechanism, they can absorb collision forces on accidental contact, keeping human-robot interaction safe.

This approach to rope-driven ...