A Roundup of the Domestic and International Companies Working on Embodied Perception!
具身智能之心· 2025-10-08 02:49
Embodied intelligence has become a new global focus. Building a general-purpose robot body and brain is the breakthrough every startup is chasing, and the field is drawing intense attention from capital and industry. Today we survey the well-known domestic and international companies in the embodied-brain space, analyzing their technical characteristics, product portfolios, and application scenarios to provide an industry panorama that supports strategic decision-making and business expansion.

Focus: companies developing robot "brain" systems, including embodied large models and multi-modal perception-and-decision systems.

(1) Domestic companies

自变量机器人 (CEO 王潜), 星海图 (CEO 高继扬)

Company profile: Founded in 2023, the company focuses on R&D of a "general embodied large model," using real-world data as its primary data source to build general-purpose robots with fine manipulation capabilities. Its technical route leans toward the "brain": it has pursued an end-to-end general embodied large-model approach from the start, and has completed 8 rounds of financing in under two years.

Representative results:
- WALL-A model: launched in October 2024 as part of the Great Wall (GW) series and billed as the largest-parameter general embodied manipulation model to date, it integrates visual, language, and motion-control signals to close the loop from perception to execution, with strong cross-task generalization.
- Open-source embodied intelligence foundation model Wall-O ...
The Domestic and International Companies Building Embodied Brains......
具身智能之心· 2025-09-13 04:03
Core Insights
- The article focuses on the emerging field of embodied intelligence, highlighting the development of general-purpose robotic "brain" systems and multi-modal perception-decision systems, which are drawing significant attention from both capital and industry [2][3].

Domestic Companies
- **X Square Robot (自变量机器人)**: Founded in 2023, it focuses on developing a general embodied large model that uses real-world data to build robots with fine manipulation capabilities. The company has completed 8 rounds of financing in less than two years. Its representative product, the WALL-A model, launched in October 2024 and is claimed to be the largest-parameter embodied intelligence model globally, integrating visual, language, and motion-control signals [6].
- **UBTECH**: Established in 2012, it is a leader in humanoid-robot commercialization with comprehensive in-house R&D capabilities. Its Thinker model, released in 2025, achieved top rankings in international benchmark tests, significantly enhancing robots' perception and planning capabilities in complex environments [10].
- **ZhiYuan Robotics**: Founded in February 2023, it aims to create world-class general embodied intelligent robots. Its Genie Operator-1 model, released in March 2025, integrates multi-modal large-model and mixture-of-experts techniques, improving task success rates by 32% compared with comparable models on the market [12].
- **Galaxy General**: Established in May 2023, it focuses on multi-modal large models driven by synthetic data. Its VLA model is billed as the first general embodied large model globally and uses a "brain + cerebellum" collaborative framework [14].
- **Qianxun Intelligent**: Founded in 2024, it is a leading AI + robotics company focused on flexible-object manipulation. Its Spirit V1 VLA model is the first to tackle long-horizon manipulation of flexible objects [16].
- **Star Motion Era**: A new tech company incubated by Tsinghua University, focusing on general artificial intelligence applications. Its ERA-42 model supports over 100 dynamic tasks through video training [18].
- **Zhujidi Power**: Concentrates on embodied intelligent robots, developing core technologies for hardware design, whole-body motion control, and training paradigms [20].

International Companies
- **Figure AI**: Focuses on embodied-intelligence manipulation algorithms, enhancing data training and algorithm performance through video-generation technology [17].
- **Physical Intelligence**: Founded in January 2023, it aims to develop advanced intelligent software for a wide range of robots. Its π0 model, released in October 2024, is a universal robot foundation model [22].
- **Google DeepMind**: Merged with Google Brain in 2023, it focuses on general artificial intelligence research. Its Gemini Robotics model can control robots to perform complex tasks without specialized training [20].
- **Skild AI**: A leading US robotics "brain" developer, aiming to create a universal robot operating system that enables intelligent operation across varied scenarios [26].
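The profiles above repeatedly invoke "VLA" (vision-language-action) models that close the loop from perception to execution. As a rough illustration of what that interface looks like, here is a minimal, hypothetical sketch in Python: all class and method names are invented for explanation and do not reflect the actual APIs of WALL-A, Genie Operator-1, Spirit V1, π0, or any other model named above.

```python
# Illustrative sketch only: a generic vision-language-action (VLA) policy interface.
# All names here are hypothetical; this is not the API of any model mentioned above.
from dataclasses import dataclass
import numpy as np


@dataclass
class Observation:
    rgb: np.ndarray       # camera frame, e.g. (H, W, 3) uint8
    proprio: np.ndarray   # robot joint positions / velocities
    instruction: str      # natural-language task description


class VLAPolicy:
    """End-to-end mapping: (image, robot state, language) -> low-level actions."""

    def __init__(self, action_dim: int = 7, chunk_len: int = 8):
        self.action_dim = action_dim  # e.g. 6-DoF end-effector delta + gripper
        self.chunk_len = chunk_len    # many VLA systems emit short action chunks

    def predict(self, obs: Observation) -> np.ndarray:
        # A real model would encode obs.rgb and obs.instruction with a
        # vision-language backbone and decode an action sequence; zeros are
        # returned here purely to show the input/output contract.
        return np.zeros((self.chunk_len, self.action_dim), dtype=np.float32)


# The perception-to-execution loop the articles describe, in outline:
#   while not done:
#       obs = robot.get_observation()          # hypothetical robot interface
#       for action in policy.predict(obs):     # act on the predicted action chunk
#           robot.apply_action(action)
```

The "brain + cerebellum" framing mentioned for Galaxy General can be read as splitting this single mapping into a slower high-level planner and a faster low-level controller.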
Qianxun Intelligent founder Han Fengtao: the embodied-intelligence valuation bubble is just spray on the wave
Core Insights
- The article discusses the emergence of embodied intelligence in robotics, highlighting the advancements and commercialization potential of the Moz1 robot developed by Qianxun Intelligent [1][4][16].
- The founder, Han Fengtao, emphasizes the importance of talent acquisition over funding in the startup phase, indicating a competitive landscape for top AI talent [3][8][9].
- The article notes the significant interest from investors in the embodied intelligence sector, contrasting it with previous experiences in the industrial robotics field [10][12].

Company Overview
- Qianxun Intelligent was established in February 2024 and has since completed two rounds of financing, raising nearly 600 million yuan, led by JD.com [1].
- The company aims to leverage advancements in AI, particularly large models, to enhance the capabilities of robots beyond traditional fixed-function industrial robots [4][10].

Market Dynamics
- The embodied intelligence sector is characterized by a high ceiling for market potential, with Chinese companies competing on a global scale, unlike the previous focus on domestic market share in industrial robotics [3][10].
- The current penetration rate of industrial robots in China is only 3%, indicating substantial growth opportunities for embodied intelligence applications [12].

Talent Acquisition
- Han Fengtao highlights the challenges of finding suitable partners and talent, stating that he spent six months identifying potential co-founders and key team members [7][9].
- The strategy includes mapping talent from top universities and companies, with a focus on high-caliber AI professionals [9].

Commercialization and Application
- The first wave of commercialization for embodied intelligence technologies is expected to begin in the coming year, with specific applications in high-value industries [1][16].
- Han Fengtao believes that the initial applications will be in sectors with high profit margins and customer price tolerance, such as logistics and hazardous environments [13][16].

Industry Trends
- The article notes a shift in the robotics industry, with increased attention and investment in embodied intelligence, paralleling the growth seen in the electric vehicle sector [14].
- The World Robot Conference in Beijing has expanded significantly, reflecting the growing interest and development in the robotics field [14][15].