Over USD 300 million raised at a valuation above USD 3 billion: "PKU-affiliated" humanoid-robot company Galaxy General sets a new single-round funding record for embodied intelligence
Hua Er Jie Jian Wen· 2025-12-19 09:17
Galaxy General has announced the completion of a new funding round of more than USD 300 million, a figure that sets a new single-round record for the embodied-intelligence sector. According to information disclosed on the 19th, the round was led by the China Mobile Chain-Leader Fund (中国移动链长基金), with joint investment from industrial capital and investment platforms including CICC Capital, the Chinese Academy of Sciences fund (中科院基金), 苏创投, the CCTV converged-media fund (央视融媒体基金), and Tianqi Co. (天奇股份). The deal also drew international investors from Singapore and the Middle East, as well as additional commitments from existing shareholders. Following the round, Galaxy General is valued at USD 3 billion (about RMB 21.13 billion).

The transaction underscores the capital markets' continued bet on the embodied-intelligence track. A Peking University-affiliated startup, Galaxy General has seen its valuation triple in just half a year; after its previous round, its external valuation was roughly USD 1 billion. With the new capital, its cumulative funding now totals about USD 800 million. The company says the proceeds will mainly go toward continued investment in core technology, faster large-scale deployment and iteration of solutions across sectors, and expansion of its global partnership network.

The financing comes at a key window in which Chinese humanoid-robot companies are accelerating toward the capital markets. Galaxy General reportedly completed its conversion into a joint-stock company on November 28 and is preparing for a Hong Kong listing; it may file with the HKEX as early as the first quarter of next year, targeting a valuation of USD 3-4 billion.

The "national team" and industrial capital double down

Before closing this round, Galaxy General had already been busy in the capital markets. On June 23 this year it completed a RMB 1.1 billion round led by CATL, and in June and 11 ...
Behind the largest humanoid-robot financing, a RMB 700 million order as well
36Kr· 2025-12-19 06:11
Humanoid robotics has just gotten a powerful shot in the arm.

Compared with the previous round in June, led by CATL and Puquan Capital (溥泉资本), the investor lineup has changed considerably and become more diversified: the long-rumored Middle Eastern capital has finally arrived, Galaxy General is visibly trying to build an international shareholder base, and, after CATL, another central-SOE heavyweight has come aboard: China Mobile.

To date, Galaxy General's cumulative funding has reached roughly USD 800 million (RMB 5.6 billion), up from only RMB 2.4 billion half a year ago. For reasons that remain undisclosed, Galaxy General also has additional large financings that have not yet been announced.

投中网 has learned that Galaxy General has completed a new funding round of more than USD 300 million (over RMB 2.1 billion) at a post-money valuation above USD 3 billion (over RMB 20 billion). These two figures mean that both the largest single financing and the valuation ceiling in China's humanoid-robot sector have been reset.

As for the investors, the round was led by the China Mobile Chain-Leader Fund, joined by investment platforms and industrial giants including CICC Capital, the Chinese Academy of Sciences fund, 苏创投, the CCTV converged-media fund, and Tianqi Co., along with investment from international institutions in Singapore and the Middle East and follow-on investment from existing shareholders.

In addition, 投中网 has exclusively learned that Galaxy General has signed a G1 robot purchase contract with a single industrial customer for 1,000 units; at G1's price of roughly RMB 700,000 per unit, the contract would be worth about RMB 700 million.

What does a RMB 700 million order mean? Consider that Unitree and Zhiyuan, these two ...
Exclusive丨Behind the largest humanoid-robot financing, a RMB 700 million order as well
投中网· 2025-12-19 04:36
More financing has yet to be disclosed.

Authors丨Zhang Nan, Cao Weiyu  Source丨投中网

Humanoid robotics has just gotten a powerful shot in the arm.

投中网 has learned that Galaxy General has completed a new funding round of more than USD 300 million (over RMB 2.1 billion) at a post-money valuation above USD 3 billion (over RMB 20 billion). These two figures mean that both the largest single financing and the valuation ceiling in China's humanoid-robot sector have been reset.

As for the investors, the round was led by the China Mobile Chain-Leader Fund, joined by investment platforms and industrial giants including CICC Capital, the Chinese Academy of Sciences fund, 苏创投, the CCTV converged-media fund, and Tianqi Co., along with investment from international institutions in Singapore and the Middle East and follow-on investment from existing shareholders.

Compared with the previous round in June, led by CATL and Puquan Capital (溥泉资本), the investor lineup has changed considerably and become more diversified: the long-rumored Middle Eastern capital has finally arrived, Galaxy General is visibly trying to build an international shareholder base, and, after CATL, another central-SOE heavyweight has come aboard: China Mobile.

To date, Galaxy General's cumulative funding has reached roughly USD 800 million (RMB 5.6 billion), up from only RMB 2.4 billion half a year ago. For reasons that remain undisclosed, Galaxy General also has additional large financings that have not yet been announced.

In addition, 投中网 has exclusively learned that Galaxy General has signed a G1 robot purchase contract with a single industrial customer for 1,000 units; at G1's price of roughly ...
How to build a general embodied navigation foundation model?
具身智能之心· 2025-11-20 00:03
Tonight we have invited Zhang Jiazhao, a Ph.D. student at Peking University, to join 具身智能之心 for a live session on his team's recent explorations in general navigation foundation models.

Navigation research in embodied intelligence today is mostly confined to specific tasks and robot platforms. To break through this limitation, the team's work has progressed from the cross-task navigation model Uni-NaVid to the cross-embodiment navigation model NavFoM, which has been applied in real scenarios such as visual obstacle avoidance, urban micro-mobility, and intelligent following.

Highlights:
1. Cross-task navigation foundation model: Uni-NaVid
2. Cross-task, cross-embodiment navigation foundation model: NavFoM
3. Applications of navigation foundation models: TrackVLA++, UrbanVLA, MM-Nav

Facing unstructured, highly dynamic environments and complex tasks that require language understanding, traditional navigation systems can no longer keep up. Navigation foundation models extend navigation algorithms from specialized capabilities toward general intelligent mobility, opening a new path for deploying embodied intelligence. Everyone is welcome to join and discuss the future of general navigation.

References:
Uni-NaVid: https://pku-epic.github.io/Uni-NaVid/
NavFoM: https://pku-ep ...
Galaxy General's new model unifies robot navigation tasks; the 7B-parameter model supports real-time deployment
具身智能之心· 2025-11-10 00:02
Core Insights - The article discusses the development of NavFoM, a foundational model for embodied navigation that aims to unify navigation tasks across different robots and scenarios, marking a significant technological leap from specialized to general-purpose navigation [1][29]. Group 1: Unified Navigation Paradigm - NavFoM is based on a fundamental idea of unifying different robot navigation tasks into a common paradigm: streaming video input from robots combined with natural language navigation instructions to predict action trajectories [3]. - The model supports multiple tasks such as visual language navigation, target search, target following, and autonomous driving, across various environments including indoor and outdoor settings, and is applicable to different types of robots like quadrupeds, wheeled robots, humanoids, drones, and cars [3][29]. Group 2: Model Structure and Efficiency - The model features TVI Tokens, which provide a scalable method for understanding images under different tasks and camera settings, enhancing the model's adaptability [5]. - To enable real-time deployment of the 7B parameter navigation model, the team introduced the Budget-Aware Token Sampling Strategy (BATS), which adaptively samples key frames under computational constraints to maintain performance while ensuring efficient operation on real robots [6][11]. Group 3: Training Data and Performance - The team trained NavFoM on 8 million navigation data entries, including various tasks and robot types, as well as 4 million entries of open-world question-answering data, effectively doubling the training volume compared to previous works [12][15]. - NavFoM achieved state-of-the-art (SOTA) and SOTA-comparable results across multiple public benchmarks without requiring task-specific fine-tuning, demonstrating its versatility and effectiveness [16][29]. Group 4: Future Implications - The development of NavFoM signifies a move towards generalization in embodied navigation models, enabling cross-industry applications and fostering further research in intelligent navigation technologies [29]. - The team aims to inspire new technologies, datasets, and benchmarks in the field of embodied navigation, accelerating innovation in intelligent services and production capabilities [29].
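The budget-aware sampling summarized above lends itself to a compact illustration. The sketch below is a minimal, hypothetical Python rendering of sampling frames under a fixed token budget; the recency-weighted scheme, function name, and parameters are illustrative assumptions, not Galaxy General's actual BATS implementation.

```python
# Hypothetical sketch of budget-aware frame sampling in the spirit of BATS.
# The article only states that key frames are sampled adaptively under a
# compute budget; the weighting below is an assumed, illustrative choice.
import numpy as np

def sample_frames(num_frames: int, budget: int, recency_bias: float = 3.0) -> np.ndarray:
    """Pick at most `budget` frame indices from a stream of `num_frames`.

    Recent frames get higher sampling weight (they matter most for the next
    action), while a thin tail of older frames preserves long-horizon context.
    """
    if num_frames <= budget:
        return np.arange(num_frames)

    ages = np.arange(num_frames)[::-1]          # 0 = newest frame
    weights = np.exp(-recency_bias * ages / num_frames)
    weights /= weights.sum()

    # Always keep the newest frame; fill the rest of the budget without replacement.
    keep = {num_frames - 1}
    candidates = np.arange(num_frames - 1)
    extra = np.random.choice(
        candidates, size=budget - 1, replace=False,
        p=weights[:-1] / weights[:-1].sum(),
    )
    keep.update(extra.tolist())
    return np.array(sorted(keep))

# Example: a 512-frame history squeezed into a 64-frame token budget.
print(sample_frames(512, 64)[:10])
```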
Computer Industry Weekly: Kimi K2 Thinking leads a new breakthrough in domestic foundation models - 20251109
SINOLINK SECURITIES· 2025-11-09 12:29
Investment Rating
- The report suggests a focus on leading domestic generative large model companies such as iFlytek, and highlights potential in AI hardware applications with recommendations for Hikvision, ArcSoft, and Hesai [3]

Core Insights
- The report emphasizes the ongoing advancements in AI capabilities, particularly with the introduction of models like Kimi K2 Thinking and NavFoM, which enhance reasoning and navigation tasks [5][12]
- It notes that the computer sector is experiencing a recovery, with AI applications accelerating, and anticipates improved operating quality and cash flow in the upcoming quarters [12][18]
- The report identifies high-growth areas for 2025, including AI computing power and LiDAR, while also highlighting stable growth in software outsourcing and financial IT [11][13]

Summary by Sections
Industry Perspective
- The computer industry is expected to maintain high growth in AI computing and LiDAR, with accelerating growth in AI applications [11][13]
- The report indicates that the overall operating intensity in the sector is expected to rise in the second half of the year, driven by new technology implementations and improved operational efficiency [12]

Market Review
- From November 3 to November 7, 2025, the computer industry index decreased by 2.54%, underperforming the CSI 300 index by 2.56 percentage points [14]
- The report lists the top five companies in terms of stock performance during this period, indicating a mixed market response [15][22]

Key Events Ahead
- Upcoming events include the 2025 China Robot Industry Development Conference and the 27th China International High-tech Achievements Trading Conference, which are expected to present opportunities in the related industry chains [26][27]
Galaxy General's new model unifies robot navigation tasks; the 7B-parameter model supports real-time deployment
量子位· 2025-11-09 07:01
Core Viewpoint - The article discusses the development of NavFoM, a foundational model for embodied navigation that aims to unify navigation tasks across different robots and scenarios, moving from specialized to general-purpose navigation capabilities [1][20]. Group 1: Unified Navigation Paradigm - NavFoM is based on a fundamental idea of unifying navigation tasks for different robots into a common paradigm: streaming video input from robots combined with natural language navigation instructions to predict action trajectories [3][21]. - The model supports multiple tasks such as visual language navigation, target search, target following, and autonomous driving, across various environments including indoor and outdoor settings, and is applicable to different types of robots like quadrupeds, wheeled robots, humanoids, drones, and cars [3][21]. Group 2: Model Structure and Features - The model structure includes TVI Tokens, which provide a scalable method for the model to understand images under different tasks and camera settings [5]. - NavFoM employs a Budget-Aware Token Sampling Strategy (BATS) to adaptively sample key frames during navigation, ensuring efficient real-time deployment of the 7B parameter model while maintaining performance [6][11]. Group 3: Training Data and Performance - The team collected 8 million navigation data entries, including visual language navigation, target navigation, target tracking, and autonomous driving data, covering various robot types and scenarios [12][21]. - NavFoM achieved state-of-the-art (SOTA) and SOTA-comparable results across multiple public benchmarks without requiring task-specific fine-tuning, demonstrating its versatility and effectiveness [16][21]. Group 4: Future Implications - The development of NavFoM marks a significant step towards generalizing embodied intelligent navigation models, enabling scalable navigation technology across industries [20][21]. - The team aims to attract more attention to embodied navigation research and stimulate the emergence of new technologies, datasets, and benchmarks, facilitating innovation in intelligent services [21].
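To make the unified paradigm above concrete, the sketch below shows how tasks as different as vision-language navigation, object search, following, and driving can share one call signature: camera frames plus a language instruction in, a short waypoint trajectory out. The class and method names are hypothetical; NavFoM does not expose a public API like this.

```python
# Minimal sketch of a unified "video + instruction -> trajectory" interface.
# All names here are assumptions made for illustration.
from dataclasses import dataclass
from typing import List, Sequence
import numpy as np

@dataclass
class Waypoint:
    x: float      # meters, forward in the robot/vehicle frame
    y: float      # meters, to the left
    yaw: float    # radians

class NavigationPolicy:
    """Any navigation task (VLN, object search, following, driving) is posed
    the same way: camera frames plus a natural-language instruction in,
    a short trajectory of waypoints out."""

    def act(self, frames: Sequence[np.ndarray], instruction: str) -> List[Waypoint]:
        raise NotImplementedError

# The same call signature covers very different tasks; only the instruction
# (and the embodiment that executes the waypoints) changes.
requests = [
    "go to the kitchen and stop next to the fridge",   # vision-language navigation
    "find a red backpack",                             # object-goal navigation
    "follow the person in the blue jacket",            # target following
    "merge into the left lane and keep 30 km/h",       # autonomous driving
]
```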
SoftBank and OpenAI form a joint venture; Unitree Robotics' Wang Xingxing: embodied robots today are at a stage similar to roughly 1-3 years before the release of ChatGPT丨AIGC Daily
创业邦· 2025-11-06 00:08
Group 1
- Amazon has issued a cease-and-desist letter to Perplexity, accusing it of computer fraud for allowing its AI shopping agent, Comet, to shop on behalf of users without clear disclosure, violating Amazon's terms of service [2]
- Perplexity responded by claiming that Amazon is using its competitive products to suppress smaller rivals and that users should have the right to choose their preferred AI shopping agents [2]
- Galaxy General, in collaboration with several universities, launched NavFoM, a navigation foundation model that supports both indoor and outdoor scenarios and can adapt to various robotic platforms [2]

Group 2
- SoftBank and OpenAI have established a joint venture named "SB OAI Japan" to exclusively promote Crystal Intelligence in Japan, with plans for an IPO in 2026 [2]
- Unitree Robotics founder Wang Xingxing stated that the current development stage of embodied robots resembles the period roughly 1-3 years before the release of ChatGPT, emphasizing the importance of large models in robotics [2]
Multi-task, all-scenario, cross-embodiment general mobility: Galaxy General releases a panoramic navigation foundation model
具身智能之心· 2025-11-06 00:03
Core Viewpoint
- The article discusses the advancements in navigation models for robots, particularly focusing on the launch of NavFoM (Navigation Foundation Model) by Galaxy General Robotics, which represents a significant leap in the capabilities of robotic navigation systems, allowing for more autonomous and adaptable robots in various environments [3][9][27].

Group 1: Technological Advancements
- NavFoM is the world's first cross-embodiment panoramic navigation foundation model, unifying navigation tasks such as Vision-and-Language Navigation, Object-goal Navigation, Visual Tracking, and Autonomous Driving into a single framework [3][9].
- NavFoM allows robots to autonomously perceive their environment and make navigation decisions in unknown settings, moving beyond simple following tasks [9][10].
- The model employs a unified learning paradigm that enables knowledge sharing across different tasks and robot forms, improving the efficiency of both training and application [13][14].

Group 2: Key Features
- NavFoM supports both indoor and outdoor scenarios, operates zero-shot without mapping or additional training data, and can adapt to various robot types, including quadrupeds, wheeled humanoids, drones, and cars [11][12].
- The model incorporates two key innovations: TVI Tokens for understanding time and direction, and the BATS strategy for efficient sampling of video data, allowing real-time responses while conserving computational resources [17][19].
- The training dataset for NavFoM includes over 8 million cross-task navigation samples and 4 million open-ended question-answer pairs, significantly enhancing its learning capabilities [21][23].

Group 3: Application and Impact
- NavFoM has demonstrated state-of-the-art performance on various international benchmarks, showing its ability to generalize across tasks and environments without task-specific fine-tuning [25].
- The model has successfully driven various robot forms to execute complex tasks, marking a significant step toward embodied intelligence in navigation systems [25][27].
- NavFoM is seen as the foundation of a comprehensive navigation system that can support applications ranging from indoor navigation to urban environments, substantially extending robotic capabilities [29][30].
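As a rough illustration of what "TVI Tokens for understanding time and direction" could look like at the sequence level, the sketch below interleaves a per-frame, per-camera indicator token with that image's patch tokens. The token format and layout are assumptions made for clarity, not the scheme actually used by NavFoM.

```python
# Illustrative sketch of interleaving temporal/viewpoint indicator tokens with
# image tokens, in the spirit of the TVI Tokens described above. The exact
# vocabulary and layout are assumed, not taken from the NavFoM paper.
from typing import List

def build_token_sequence(num_frames: int, camera_names: List[str],
                         patches_per_image: int = 16) -> List[str]:
    """Lay out tokens as: for each timestep, for each camera, one indicator
    token encoding (time index, camera) followed by that image's patch tokens.
    The indicator tells the model when and from which viewpoint the following
    patches were observed, so one backbone can serve robots with different
    camera rigs."""
    seq: List[str] = []
    for t in range(num_frames):
        for cam in camera_names:
            seq.append(f"<TVI t={t} view={cam}>")
            seq.extend(f"<patch:{cam}:{t}:{i}>" for i in range(patches_per_image))
    return seq

# A quadruped with a surround rig and a car with front-facing cameras reuse the
# same layout; only the camera list differs.
print(build_token_sequence(2, ["front", "left", "right", "rear"])[:6])
```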
Tencent Research Institute AI Digest 20251106
腾讯研究院· 2025-11-05 16:01
Group 1: Generative AI Developments
- Google announced Project Suncatcher, planning to launch two prototype satellites with Trillium TPU by early 2027, utilizing solar energy for AI computation [1]
- Anthropic introduced a new paradigm called "code execution," reducing token consumption from 150,000 to 2,000, achieving a 98.7% efficiency improvement [2]
- The Open-Sora Plan team launched Uniworld V2, excelling in Chinese language processing and detail control, outperforming OpenAI's GPT-Image-1 in benchmarks [3]

Group 2: Browser and AI Integration
- QQ Browser's version 19.8.0 introduced an "AI+" floating window feature integrating 14 AI tools for various tasks, enhancing user experience [4]

Group 3: Geographic AI Enhancements
- Google upgraded Earth AI with new foundational models for remote sensing, demographic dynamics, and environmental analysis, significantly improving performance metrics [5][6]

Group 4: Robotics Innovations
- Xiaopeng showcased the next-generation IRON humanoid robot with 82 degrees of freedom and a total computing power of 2250 TOPS, setting a new standard in humanoid robotics [7]
- Generalist launched a new embodied foundational model GEN-0, trained on over 270,000 hours of real-world data, demonstrating significant advancements in robotic capabilities [8]

Group 5: Navigation and AI Collaboration
- Galaxy General collaborated with multiple universities to introduce the NavFoM model, unifying various navigation tasks and enhancing spatial understanding [9]

Group 6: Startup Methodologies
- ElevenLabs operates with 350 employees divided into 20 autonomous product teams, each required to achieve product-market fit within six months or face dissolution [10]