Physical AGI
RMB 500 Million Raised in Three Months! Embodied-Intelligence Company 极佳视界 Completes RMB 200 Million A2 Round, Unveiling a Native Model and Native Embodiment for Physical AGI
机器人圈· 2025-12-10 09:37
Recently, embodied-intelligence company 极佳视界 secured a further RMB 200 million in A2 funding, just one month after its previous round was announced. The round was reportedly led by 达晨财智, with existing shareholder 华控基金 as co-lead; well-known institutions including 首发展创投, 浦耀信晔, 财鑫资本, 珠海科技产业集团, 张科垚坤, and 复琢创投 followed on, and existing shareholder 合鼎共资本 over-subscribed. Before this, 极佳视界 had completed three consecutive rounds (Pre-A, Pre-A+, and A1), bringing its Series A funding to four rounds and a cumulative RMB 500 million within three months. As a company focused on general intelligence for the physical world, 极佳视界 has not only released a native model aimed at physical AGI, but also launched a corresponding embodiment on November 26, 2025, staking out physical-AGI terminal business. Concretely, its products span the full physical-AI software and hardware stack: the world-model platform GigaWorld (driving and embodied), the general embodied brain GigaBrain, and the general embodied robot Maker. The product matrix lays out a systematic path for physical AI from both the software and hardware ends. On the model side, 极佳视界 proposes a native paradigm of "world model + action model + reinforcement learning," with every link driven by the world model. At present, model architectures are converging toward general action models (such as VLA and world-action models), while data sources are shifting to ...
达晨财智 Leads as 极佳视界 Completes RMB 200 Million A2 Round
Xin Lang Cai Jing· 2025-12-08 15:14
极佳视界 believes physical AI is entering a new and critical era, with the next two to three years forming the key window for a physical-AGI breakthrough. With continued breakthroughs in world models and action models, the physical world's "ChatGPT moment" is fast approaching. Source: Shanghai Securities News · China Securities Network (reporter: 孙小程). Recently, embodied-intelligence company 极佳视界 secured a new round of investment: an RMB 200 million A2 round led by 达晨财智, with existing shareholder 华控基金 as co-lead; 首发展创投, 浦耀信晔, 财鑫资本, 珠海科技产业集团, 张科垚坤, 复琢创投, and other well-known institutions followed on, and existing shareholder 合鼎共资本 over-subscribed. Before this, 极佳视界 had completed Pre-A, Pre-A+, and A1 rounds, for four rounds and a cumulative RMB 500 million of Series A funding within three months. As a company focused on general intelligence for the physical world, 极佳视界 has not only released a native model aimed at physical AGI, but also launched a corresponding embodiment on November 26, 2025, staking out physical-AGI terminal business. Concretely, its products include the world-model platform GigaWorld (driving and embodied), the general embodied brain GigaBrain, and the general embodied robot Maker, a full physical-AI software and hardware stack. The product matrix lays out a systematic path for physical AI from both the software and hardware ends. On the model side, 极佳视界 proposes a native paradigm of "world model + action model + reinforcement learning," with every link ...
极佳视界 Completes RMB 200 Million A2 Round, Led by 达晨 and 华控
Zheng Quan Shi Bao Wang· 2025-12-08 13:38
Before this, 极佳视界 had completed Pre-A, Pre-A+, and A1 rounds, for four rounds and a cumulative RMB 500 million of Series A funding within three months. Just one month after its previous round was announced, 极佳视界 has again secured new investment. On December 8, the company announced an RMB 200 million A2 round led by 达晨财智, with existing shareholder 华控基金 as co-lead; 首发展创投, 浦耀信晔, 财鑫资本, 珠海科技产业集团, 张科垚坤, 复琢创投, and other well-known institutions followed on, existing shareholder 合鼎共资本 over-subscribed, and 庚辛资本中国 acted as financial advisor. Given the importance of the native model, a data-first native embodiment centered on "manipulation and the upper limbs," better able to interact with the physical world, is increasingly a key requirement; the value of a scalable closed iteration loop spanning "sensors, actuators, data-collection devices, and the general model" is likewise becoming ever more evident. Public information shows that 极佳视界, founded in 2023, focuses on physical AI, specifically "world-model-driven general intelligence for the physical world." Its products include the world-model platform GigaWorld (driving and embodied), the embodied foundation model GigaBrain, and the general embodied robot Maker, a full physical-AI software and hardware stack. 极佳视界 believes physical AI is entering a new and critical era, with the next two to three years forming the key window for a physical-AGI breakthrough. With continued breakthroughs in world models and action models, the physical world's "ChatGPT moment" is fast approaching, and the native model ...
RMB 500 Million in Three Months! 极佳视界 Raises RMB 200 Million in Its A2 Round, Unveiling a Native Model and Embodiment for Physical AGI
36Ke· 2025-12-08 07:56
Reportedly, the RMB 200 million A2 round was led by 达晨财智, with existing shareholder 华控基金 as co-lead; 首发展创投, 浦耀信晔, 财鑫资本, 珠海科技产业集团, 张科垚坤, 复琢创投, and other well-known institutions followed on, and existing shareholder 合鼎共资本 over-subscribed. The target: the physical world's "ChatGPT moment." Just one month after its previous round was announced, embodied-intelligence company 极佳视界 has again secured new investment. Before this, it had completed Pre-A, Pre-A+, and A1 rounds, for four rounds and a cumulative RMB 500 million of Series A funding within three months. As a company focused on general intelligence for the physical world, 极佳视界 has not only released a native model aimed at physical AGI, but also launched a corresponding embodiment on November 26, 2025, staking out physical-AGI terminal business. Concretely, its products include the world-model platform GigaWorld (driving and embodied), the general embodied brain GigaBrain, and the general embodied robot Maker, a full physical-AI software and hardware stack. The product matrix lays out a systematic path for physical AI from both the software and hardware ends. On the model side, 极佳视界 proposes a native paradigm of "world model + action model + reinforcement learning," with every link driven by the world model. At present, model architectures are converging toward general action models (such as VLA and world-action models); data sources are shifting to center on real-robot data and world-model-generated data; and the learning approach is taking the form of "imitation learning + reinforcement ...
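The "world model + action model + reinforcement learning" loop described above can be sketched at a toy level: a learned world model synthesizes rollouts, an action model is first fit by imitation on real plus world-model-generated demonstrations, then refined against a reward inside the world model. Everything below (the one-dimensional dynamics, the linear policy, the goal, the hill-climbing update) is an invented illustration of the general idea, not 极佳视界's actual pipeline.

```python
import random

random.seed(0)
goal = 0.0  # toy target state the refined policy should reach

def world_model(state, action):
    """Toy learned dynamics: the next state moves halfway toward the action."""
    return state + 0.5 * (action - state)

def generate_rollout(start, policy, steps=5):
    """Use the world model to synthesize (state, action) training pairs."""
    traj, s = [], start
    for _ in range(steps):
        a = policy(s)
        traj.append((s, a))
        s = world_model(s, a)
    return traj

def final_dist(wt, steps=5):
    """Distance to the goal after rolling the linear policy a = wt * s
    inside the world model for a few steps."""
    s = 1.0
    for _ in range(steps):
        s = world_model(s, wt * s)
    return abs(s - goal)

# 1) Imitation: fit a linear policy a = w * s on "real" demonstrations
#    plus world-model-generated data (both follow the expert a = 0.8 * s).
expert = [(s, 0.8 * s) for s in (0.2, 0.5, 1.0)]
expert += generate_rollout(1.0, lambda s: 0.8 * s)
w = sum(s * a for s, a in expert) / sum(s * s for s, _ in expert)

# 2) Reinforcement: crude hill climbing on w, keeping candidates whose
#    world-model rollouts end closer to the goal.
for _ in range(200):
    cand = w + random.uniform(-0.05, 0.05)
    if final_dist(cand) < final_dist(w):
        w = cand
```

After imitation the policy matches the expert (w = 0.8); the reinforcement stage then pushes w toward values whose simulated rollouts land nearer the goal, which is the closed loop the paradigm describes: the world model both supplies data and serves as the environment for refinement.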
达晨 and 华控 Lead as 极佳视界 Raises Another RMB 200 Million in Its A2 Round, Betting on a Native "World Model + Action Model" Architecture
Tai Mei Ti APP· 2025-12-08 07:17
Image source: internet. Leading embodied-intelligence company 极佳视界 has announced a new funding round. The company recently completed an RMB 200 million A2 round led by 达晨财智, with existing shareholder 华控基金 as co-lead; 首发展创投, 浦耀信晔, 财鑫资本, 珠海科技产业集团, 张科垚坤, 复琢创投, and other well-known institutions followed on, and existing shareholder 合鼎共资本 made an additional over-subscribed investment. With this, the company has completed four rounds (Pre-A, Pre-A+, A1, and A2) in just three months, for a cumulative RMB 500 million, a clear sign of the capital market's recognition of its technical path and commercialization prospects. Founder and CEO Dr. 黄冠 holds an innovation-leadership engineering doctorate from Tsinghua University's Department of Automation. He previously led visual-perception technology at Horizon Robotics and served as partner and VP of algorithms at 鉴智机器人, with earlier experience at top global research institutions including Samsung's China research institute and Microsoft Research Asia. Over the past decade he has been deeply involved in physical AI's evolution from early research to industrial deployment, leading teams to repeated first-place finishes in the world's most influential computer-vision competitions, including FRVT, COCO, and VOT, and to large-scale industrialization of multiple technologies. In autonomous driving, his team's BEVDet series became one of the most influential BEV perception paradigms worldwide, long holding first place on the nuScenes leaderboard and reaching mass production at scale; the team also led Horizon's AIDI platform, the industry's largest-scale data ...
Zhiyuan Releases an Embodied-Data Innovation Foundation, Working with the Industry to Build Physical-AGI Infrastructure
具身智能之心· 2025-12-03 03:47
Core Insights
- The article discusses the launch of the RoboXstudio platform and the RoboCOIN dataset by the Beijing Zhiyuan Artificial Intelligence Research Institute, aimed at addressing challenges in embodied data production and enhancing research efficiency in embodied intelligence [6][19].

Group 1: Challenges in Embodied Data
- Embodied data faces three main challenges: data silos, lack of quality control, and high costs [7][8].
- Data silos arise from non-standardized formats and isolated data tools, complicating data processing [7].
- Quality-control issues include frame loss, stuttering, and timestamp misalignment, leading to unreliable data records [8].
- The cost of generating embodied data remains high due to reliance on manual operations and the absence of mature platforms for scalability [8].

Group 2: CoRobot Software Framework
- The CoRobot framework was developed to standardize operations, improve quality, and enhance efficiency in embodied-data management [10].
- It consists of five components: data-collection tools, format-conversion tools, data-processing tools, data-management tools, and model-training tools [10].

Group 3: RoboCOIN Dataset
- The RoboCOIN dataset is a collaboration involving multiple companies and universities, designed to be the global benchmark for dual-arm robot data [14][16].
- It features the largest number of dual-arm embodiments, with 180,000 data entries covering more than ten scenarios, including industrial and retail applications [16].
- The dataset is noted for its fine-grained labeling and ease of use, facilitated by the CoRobot framework [16].

Group 4: RoboXstudio Platform
- The RoboXstudio platform aims to streamline the entire process of data collection, annotation, management, model training, evaluation, and deployment [19][22].
- It supports diverse robot types and tasks, ensuring comprehensive data collection without gaps [22].
- The platform integrates open-source frameworks and multimodal models to reduce operational costs and enhance user accessibility [22].

Group 5: Open Source and Collaboration
- The Zhiyuan Institute emphasizes the importance of collaborative innovation in advancing artificial intelligence, with a significant number of downloads of its open-source models [23].
- The RoboCOIN dataset and CoRobot framework are made available to the public to foster industry-wide collaboration and innovation [23][25].
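The data-quality failure modes named above (frame loss, stuttering, timestamp misalignment) lend themselves to simple automated checks of the kind such a pipeline would run. The sketch below is a hypothetical illustration, not CoRobot's or RoboXstudio's actual API; the `Frame` record, the 30 Hz default, and the jitter threshold are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float  # capture time in seconds (assumed episode format)
    payload: bytes    # raw sensor data

def check_episode(frames, expected_hz=30.0, jitter_tol=0.5):
    """Flag frame loss and timestamp misalignment in a recorded episode.

    A non-increasing timestamp suggests misalignment between devices; a gap
    far larger than the nominal period suggests dropped frames. Thresholds
    are illustrative assumptions, not values from the article.
    """
    period = 1.0 / expected_hz
    issues = []
    for i in range(1, len(frames)):
        dt = frames[i].timestamp - frames[i - 1].timestamp
        if dt <= 0:
            issues.append((i, "timestamp misalignment"))
        elif dt > period * (1 + jitter_tol) * 2:
            issues.append((i, "possible frame loss"))
    return issues
```

For example, an episode recorded at a nominal 30 Hz with one frame arriving 4 periods late would be flagged as possible frame loss at that index, while a timestamp that runs backwards would be flagged as misalignment.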
A 10,000-Word Deep Dive into the "Growth History" of Embodied Intelligence: Which Mountains and Seas Has It Crossed, and Where Is It Heading?
自动驾驶之心· 2025-08-10 03:31
Core Viewpoint
- The article discusses the rapid advancements in embodied intelligence and robotics, emphasizing the need for robots to integrate AI with physical capabilities to perform tasks that are currently challenging for them, such as simple actions that children can do [8][9].

Group 1: Evolution of Embodied Intelligence
- Over the past decade, embodied intelligence has evolved significantly, with a focus on integrating AI into robots' control systems to enhance their performance in the physical world [9].
- The gap between research prototypes and practical applications is highlighted, with a need for robots to reach a Technology Readiness Level (TRL) of 8 to 9 for industrial acceptance [10].
- Machine-learning advancements, including better sensors and algorithms, have led to substantial improvements in robotics, but achieving high success rates in real-world applications remains a challenge [12][14].

Group 2: Opportunities and Challenges in Robotics
- The current landscape presents both opportunities and challenges for robotics, with a focus on structured environments for initial applications before tackling more complex, unstructured settings [14][17].
- The importance of scalable learning systems in robotics is emphasized, as researchers aim to leverage data from multiple robots to enhance performance across various tasks [20].

Group 3: Specialized vs. General Intelligence
- The discussion contrasts Artificial Specialized Intelligence (ASI) with Artificial General Intelligence (AGI), suggesting that while ASI focuses on high performance in specific tasks, AGI aims for broader capabilities [27][29].
- The advantages of specialized models include efficiency, robustness, and the ability to run on-premise, while general models offer greater flexibility but are more complex and costly to operate [31][35].

Group 4: Future Directions in Robotics
- The emergence of vision-language-action (VLA) models, such as RT-2, represents a significant step forward in robotics, allowing for more complex task execution through remote API calls [44][46].
- The development of the RTX dataset, which includes diverse robotic data, has shown that cross-embodiment models can outperform specialized models in various tasks, indicating the potential for generalization in robotics [47][48].
- Second-generation VLA models, like PI-Zero, are designed to handle continuous actions and complex tasks, showcasing advancements in robot dexterity and adaptability [49][50].

Group 5: Data and Performance in Robotics
- The importance of data in achieving high performance in robotics is underscored, with a call for large-scale data collection to support the development of robust robotic systems [62][70].
- The article concludes with a discussion on the need for a balance between performance and generalization in robotics, suggesting that achieving high performance is crucial for real-world deployment [66][68].
In Conversation with Zhiyuan's Wang Zhongyuan: Robots' "Big Brain" and "Small Brain" May Merge, but Not Today
AI前线· 2025-06-11 08:39
Core Insights
- The article discusses the launch of the "Wujie" series of large models by Zhiyuan Research Institute, focusing on advancements in multi-modal AI technology and its applications in physical AGI [1][2][3].

Group 1: New Model Launch
- The "Wujie" series includes several models such as Emu3, Brainμ, RoboOS2.0, RoboBrain2.0, and OpenComplex2, aimed at enhancing AI's understanding of and interaction with the physical world [1][2].
- Emu3, first released in October 2024, is designed as a native multi-modal architecture that enables large models to comprehend and reason about the world [3][4].

Group 2: Technological Advancements
- Brainμ, based on Emu3, integrates various brain signals to perform multiple neuroscience tasks, demonstrating significant performance improvements over existing models [4][5].
- RoboOS2.0 is the first open-source framework for embodied intelligence, allowing seamless integration of skills from various robot models, with a 30% performance enhancement compared to its predecessor [6][7].

Group 3: Applications and Collaborations
- Brainμ has potential applications in brain-computer interfaces, having successfully reconstructed sensory signals using portable EEG systems [5].
- The OpenComplex2 model represents a breakthrough in dynamic conformational modeling of biological molecules, enhancing the understanding of molecular interactions at atomic resolution [11][12].

Group 4: Future Directions
- The article emphasizes the ongoing evolution of large-model technology, with a focus on bridging the gap between digital and physical worlds, which is crucial for achieving physical AGI [2][3].
- RoboBrain2.0 has improved task-planning and spatial-reasoning capabilities, achieving a 74% increase in task-planning accuracy compared to its predecessor [8][9].
In Conversation with Zhiyuan Research Institute President Wang Zhongyuan: AI Is Accelerating from the Digital World into the Physical World
21 Shi Ji Jing Ji Bao Dao· 2025-06-08 11:49
Core Insights
- The rapid advancement of AI technology is shifting from digital to physical applications, with a focus on humanoid robots as practical tools rather than mere mascots [1][2].
- The development trajectory of large models is moving toward multi-modal world models, which aim to enhance AI's understanding of and interaction with the physical world [2][3].

AI Technology Development
- The performance of large language models is reaching a bottleneck, necessitating improvements through reinforcement learning, high-quality synthetic data, and activation of under-utilized multi-modal data [1][2].
- The introduction of the "Wujie" series of large models, including the Emu3 multi-modal world model, signifies a strategic shift toward understanding physical causal relationships [2][3].

Embodied Intelligence
- Humanoid robots are recognized for their long-term value due to their design compatibility with human environments and the availability of extensive human-behavior data for model training [3][4].
- Current limitations in data volume hinder the training of models that integrate both "big brain" and "small brain" functionalities, indicating a need for further development [4][6].

Industry Trends
- The focus on embodied intelligence is expected to prioritize applications in controlled environments, such as logistics and repetitive tasks, where safety and efficiency are paramount [3][4].
- The integration of "big brain" and "small brain" is acknowledged as a potential future trend, but current data limitations prevent immediate implementation [4][5].

AGI Development
- The emergence of Agents in AI signifies a new phase where foundational models can support the development of various applications, akin to mobile apps in the internet era [5][6].
- The industry is still in the early stages of embodied-intelligence development, facing challenges similar to those encountered in the early days of AI large models [5][6].
Zhiyuan Releases the "Wujie" Series of Large Models, Announcing a Strategy Built around Physical AGI
Xin Lang Ke Ji· 2025-06-06 02:51
Group 1
- The core viewpoint of the news is the launch of the "Wujie" large-model series by the Beijing Zhiyuan Artificial Intelligence Research Institute, focusing on advancements in physical AGI and breaking the boundaries between virtual and real worlds [1].
- The "Wujie" series includes four models: Emu3, Brainμ, RoboBrain 2.0, and OpenComplex2, each targeting different aspects of multi-modal learning and applications in neuroscience [1].
- Emu3, first released in October 2024, utilizes a novel token-prediction paradigm for unified multi-modal learning, allowing images and videos to be encoded into discrete symbol sequences that are isomorphic to text [1].

Group 2
- Brainμ is built on the Emu3 architecture and tokenizes brain signals from various neuroscience modalities, enabling multi-directional mapping between brain signals and other modalities like text and images [2].
- The model has been pre-trained on over 1 million units of neural signals and aims to support a wide range of applications in neuroscience, from basic research to clinical studies and brain-computer interfaces [2].
- Collaborations with leading neuroscience laboratories and research teams in China, including institutions such as Tsinghua University and Peking University, are being established to expand the scientific and industrial applications of Brainμ [2].
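The idea of "discrete symbol sequences isomorphic to text" can be illustrated with a toy sketch: quantize image values against a codebook, offset the resulting indices into a vocabulary shared with text tokens, and treat the concatenation as one sequence for ordinary next-token prediction. The vocabulary sizes and the nearest-neighbor scalar quantizer below are simplified assumptions, not Emu3's actual tokenizer.

```python
import numpy as np

TEXT_VOCAB = 1000                       # assumed text vocabulary size
CODEBOOK = np.linspace(0.0, 1.0, 256)   # toy visual codebook: 256 scalar codes

def tokenize_image(pixels):
    """Map each normalized pixel to its nearest codebook index, then offset
    the indices past the text vocabulary so both modalities share one ID space."""
    pixels = np.asarray(pixels, dtype=float).ravel()
    idx = np.abs(pixels[:, None] - CODEBOOK[None, :]).argmin(axis=1)
    return (TEXT_VOCAB + idx).tolist()

def build_sequence(text_tokens, pixels):
    """Concatenate text and visual tokens into a single sequence that one
    autoregressive model could train on with plain next-token prediction."""
    return list(text_tokens) + tokenize_image(pixels)
```

Because visual tokens live in the same integer space as text tokens, a single decoder can predict the next token regardless of modality, which is the sense in which the visual sequence is "isomorphic to text."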