Robot Perception
RoboSense速腾聚创 (02498): AC2, the True Eye for Robot Manipulation, to Debut at CES
智通财经网· 2026-01-06 03:46
According to Zhitong Finance APP, on January 5, RoboSense速腾聚创 announced that the AC2, billed as a true "eye" for robot manipulation, will be exhibited to the global robotics market at CES 2026. The AC2 provides robots with hardware-fused 3D spatial perception and 6-DoF motion information, maintaining a stable ranging accuracy of ±5 mm within an 8 m detection range*, and, combined with 1600×1200 high-resolution RGB imagery, can clearly reconstruct fine 3D detail and perceive small objects. The AC2 adopts an ultra-large-FOV design: whether in fused-perception mode or in single-sensor modes such as dToF, stereo, or RGB, it delivers a consistent 120°×90° field of view, more than 70% larger than traditional 3D cameras. Thanks to RoboSense's chip-level hardware synchronization control across multiple sensors, the AC2's fused-perception synchronization accuracy is <1 ms, tightly aligning the different data streams. It also uses a global shutter to eliminate intra-frame rolling-shutter ("jelly") artifacts, further improving synchronization so that manipulation runs smoothly. The AC2 is reportedly the industry's first super-sensor system to integrate a fully solid-state dToF LiDAR, a stereo RGB camera, and an IMU, helping today's robots overcome the difficulty of fine manipulation across complex scenarios; it can be widely applied to humanoid robots, warehouse AGVs, home robots, digital twins, and other scenarios. The RoboSense AI-Ready ecosystem has been adapted to AC2 use cases, adding open-source algorithms such as pose estimation and human-skeleton recognition. The AI-Ready ecosystem not only provides AC ...
RoboSense速腾聚创: AC2, the True Eye for Robot Manipulation, to Debut at CES
Ge Long Hui· 2026-01-06 03:34
On January 5, 2026, according to official information from RoboSense速腾聚创, the AC2, a true "eye" for robot manipulation, will be exhibited to the global robotics market at CES 2026. The AC2 is reportedly the industry's first super-sensor system to integrate a fully solid-state dToF LiDAR, a stereo RGB camera, and an IMU, helping today's robots overcome the difficulty of fine manipulation across complex scenarios; it can be widely applied to humanoid robots, warehouse AGVs, home robots, digital twins, and other scenarios. The AC2 provides robots with hardware-fused 3D spatial perception and 6-DoF motion information, maintaining a stable ranging accuracy of ±5 mm within an 8 m detection range*, and, combined with 1600×1200 high-resolution RGB imagery, can clearly reconstruct fine 3D detail and perceive small objects. The AC2 adopts an ultra-large-FOV design: whether in fused-perception mode or in single-sensor modes such as dToF, stereo, or RGB, it delivers a consistent 120°×90° field of view, more than 70% larger than traditional 3D cameras. Thanks to RoboSense's chip-level hardware synchronization control across multiple sensors, the AC2's fused-perception synchronization accuracy is <1 ms, tightly aligning the different data streams. It also uses a global shutter to eliminate intra-frame rolling-shutter ("jelly") artifacts, further improving synchronization so that manipulation runs smoothly. The AC2 uses a wear-resistant glass cover and measures only L102×H32×D45 (mm), so it can easily be embedded in narrow spaces such as a humanoid robot's face, a robot dog's head, or the end of a robotic arm. 65mm ...
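To make the synchronization claim in the two articles above concrete: a fused dToF + stereo RGB + IMU system only benefits from chip-level hardware sync if downstream software actually pairs frames within that budget. Below is a minimal, illustrative sketch of timestamp-based depth/RGB frame pairing with a 1 ms tolerance; the data structures and function names are assumptions for illustration, not part of any RoboSense SDK.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative frame container; a real sensor SDK would define its own types.
@dataclass
class Frame:
    timestamp_s: float      # capture time in seconds
    payload: object         # depth map, RGB image, or IMU sample

SYNC_TOLERANCE_S = 0.001    # 1 ms budget, mirroring the "<1 ms" figure quoted above

def pair_frames(depth: List[Frame], rgb: List[Frame]) -> List[Tuple[Frame, Frame]]:
    """Greedily pair each depth frame with the nearest-in-time RGB frame
    (both lists sorted by timestamp), keeping only pairs whose timestamps
    differ by less than the synchronization budget."""
    pairs: List[Tuple[Frame, Frame]] = []
    j = 0
    for d in depth:
        # Advance the RGB pointer while the next RGB frame is at least as close in time.
        while j + 1 < len(rgb) and abs(rgb[j + 1].timestamp_s - d.timestamp_s) <= abs(rgb[j].timestamp_s - d.timestamp_s):
            j += 1
        if rgb and abs(rgb[j].timestamp_s - d.timestamp_s) < SYNC_TOLERANCE_S:
            pairs.append((d, rgb[j]))
    return pairs
```

With hardware-triggered exposures, nearly every pair passes the 1 ms check; with free-running sensors and software timestamps, many frames would be dropped or misaligned, which is the motivation for the chip-level synchronization described above.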
RoboSense速腾聚创: A PhD Startup that "Sees" the Future
Shen Zhen Shang Bao· 2025-11-24 01:46
Core Insights
- RoboSense has launched the AC2, the industry's first integrated super-sensor system, enabling high-precision, wide-range perception for robots across scenarios [1]
- The company has evolved from a single hardware manufacturer into a comprehensive AI robotics ecosystem provider, showcasing "Shenzhen speed" in its growth trajectory [1]

Company Development
- RoboSense was founded in 2014 by Qiu Chunxin, who identified the limitations of traditional cameras for outdoor mobile robots and aimed to develop a higher-performance perception system [2]
- The company adopted a dual development strategy of "AI software-perception algorithms" and "hardware-sensors," enabling it to provide system-level solutions from its early days [2]
- In 2016, RoboSense launched its first 16-beam mechanical LiDAR and reached mass production in 2017, supporting advances in both autonomous driving and robotics [2]

Technological Advancements
- RoboSense developed the world's first automotive-grade MEMS solid-state LiDAR, the M1, which reached mass production in June 2021, filling a significant market gap [3]
- The company has high production capacity, delivering one LiDAR every 12 seconds, which has attracted major automotive clients [4]

Market Position
- As of January 2024, RoboSense became the first LiDAR company worldwide to list on the Hong Kong Stock Exchange, with over 880,000 LiDAR units delivered and partnerships with 28 automotive companies [4]
- RoboSense's vision extends beyond automotive applications, aiming to become a leading global robotics technology platform [4]

Robotics Applications
- Since 2017, RoboSense has provided LiDAR and algorithm support for various robotic applications, including unmanned delivery and inspection [5]
- The company has collaborated with Alibaba's Cainiao on unmanned logistics vehicles and has served over 3,200 robotics clients across multiple sectors [5]

Strategic Shift
- On January 5, 2024, RoboSense announced its AI robotics strategy, introducing AI infrastructure and new products and marking its transition from a LiDAR supplier to an AI robotics technology platform [6][7]
- The release of the Active Camera and the upgraded AC2 super-sensor system signals the company's commitment to lowering development barriers for robotics and enhancing intelligent capabilities [7]
Tsinghua-Affiliated and Tsinghua-Built: A Pair of "Robot Eyes" Gets Funded
36Kr· 2025-11-04 03:32
Core Insights
- The current investment trend in the "embodied intelligence" sector remains strong, with financing events and amounts up significantly from earlier periods [2][3]
- The recent financing round for Saiguan Intelligent, focused on "robot perception," signals a shift toward upstream components in the robotics industry, emphasizing the need for advanced technologies and strategic expansion [4][10]

Investment Trends
- In the first ten months of 2025, the embodied intelligence sector saw 195 financing events, 69 of them exceeding 100 million yuan, a substantial increase over the preceding seven months [2]
- The investment landscape is characterized by a rapid pace and a focus on key components such as robotic joints and flexible materials, as leading companies build out competitive advantages [3][4]

Company Developments
- Saiguan Intelligent recently completed a Pre-A round led by Lingge Venture Capital, with participation from Hengdian Capital and existing shareholder Yuanqiao Capital, to strengthen its core technology and expand its product lines [4][10]
- The company aims to develop a new LiDAR technology architecture that supplies high-density point-cloud data to robots, improving their perception in dynamic environments [7][11]

Technological Innovations
- Saiguan Intelligent combines multi-modal fusion with a local computing platform to build an integrated perception-decision system for robots, which is expected to improve operational efficiency [6][9]
- The company's Octa series products are designed to provide precise perception for various industrial robots, matching the demand for comprehensive sensing solutions in the robotics sector [7][11]

Strategic Partnerships
- The involvement of investors with strong ties to top universities and industrial backgrounds, such as Lingge Venture Capital and Hengdian Capital, underscores the strategic importance of advanced manufacturing and robot perception technologies [10][11]
- Yuanqiao Capital's continued investment reflects confidence in the growth potential of core robotic components, highlighting expected market expansion and opportunities for technological innovation [10]
奥普特 (OPT, 688686): Machine Vision Leader Sees Improving Conditions Across Multiple Industries
HTSC· 2025-08-24 07:35
Investment Rating
- The investment rating for the company is maintained at "Buy" with a target price of RMB 132.00 [1][5]

Core Views
- The company reported revenue of RMB 683 million for H1 2025, up 30.68% year on year, with net profit of RMB 146 million, up 28.80% year on year [1][2]
- The company is expected to benefit from the recovery of the lithium battery and 3C industries, driven by consumer-electronics upgrades and the resurgence of these sectors [1][3]
- The machine vision industry is projected to grow at an average annual rate of around 20% over the next five years, with the Chinese market expected to exceed RMB 21 billion by 2025 [3]

Summary by Sections

Financial Performance
- In H1 2025, the company achieved a gross margin of 65.47%, down slightly by 0.53 percentage points year on year; operating cash flow rose 1123.58% on improved collections [2][3]
- The H1 2025 revenue breakdown shows growth across major sectors: 3C revenue up 23.82%, lithium battery up 49.35%, semiconductor up 25.51%, and automotive up 65.67% [3]

Business Development
- The company is expanding into robotics, aiming to become a core supplier of perception solutions for robots by leveraging its advanced vision technologies [4]
- It has established a robotics division and is developing key visual components for various robotic applications, including dToF cameras and LiDAR [4]

Profit Forecast and Valuation
- The 2025-2027 profit forecast has been adjusted, with net profit projected at RMB 201.59 million, RMB 246.79 million, and RMB 303.80 million respectively, a downward revision reflecting previously optimistic expense and margin assumptions [5][17]
- The company is assigned a PE ratio of 80 times for 2025, giving a target price of RMB 132.00 and indicating a strong growth outlook relative to peers (see the arithmetic check below) [5][12]
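As a quick sanity check of the valuation arithmetic above, applying the 80× 2025 PE to the projected net profit gives the implied market capitalization, and dividing by the target price gives an implied share count; both derived figures are illustrative inferences, not numbers taken from the report.

$$
\text{Implied market cap} = 80 \times \text{RMB}\,201.59\,\text{m} \approx \text{RMB}\,16.1\,\text{bn},
\qquad
\text{Implied shares} \approx \frac{\text{RMB}\,16.1\,\text{bn}}{\text{RMB}\,132.00} \approx 122\,\text{m}.
$$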
Shangdao VC Network · Member Update | 环视智能 (Huanxi Intelligent) Completes Angel Round in the Tens of Millions of Yuan
Sou Hu Cai Jing· 2025-08-05 16:05
Group 1
- The core point of the news is that Huanxi Intelligent has completed an angel round worth tens of millions of yuan, which will be used to strengthen its spatial-intelligence hardware and improve operational precision across scenarios [2][3][4]
- Huanxi Intelligent, founded in 2024 in Tianjin, focuses on low-cost, low-compute, high-dynamic spatial-intelligence hardware and has achieved millimetre-level spatial computation and autonomous navigation [2][3]
- The funding will go to two main areas: reducing depth error within 10 meters to 2 centimeters within six months, and developing a "spatial intelligence + world model" framework that lets robots predict dynamic risks [3][4]

Group 2
- The investment rationale rests on Huanxi Intelligent's ability to address the high cost of perception in robotics through hardware acceleration, with end-to-end execution capability from chip modules to scene deployment [4]
- Recent government policies, including the Ministry of Industry and Information Technology's "Robot+" application action plan, support the growth of the robotics industry and align with Huanxi Intelligent's technology commercialization efforts [5]
- The platform encourages fund managers to invest in critical algorithms, scene validation, and supply-chain security, promoting a collaborative approach to risk and profit sharing in the robot perception industry [5]
The RoboSense 2025 Machine Perception Challenge Officially Launches
具身智能之心· 2025-06-25 13:52
Core Viewpoint
- The RoboSense Challenge 2025 aims to systematically evaluate robots' perception and understanding capabilities in real-world scenarios, addressing the limitations of traditional perception algorithms in complex environments [1][44]

Group 1: Challenge Overview
- The challenge is organized by multiple prestigious institutions, including the National University of Singapore and the University of Michigan, and is officially recognized as part of IROS 2025 [5]
- The competition will take place in Hangzhou, China, with key dates including registration opening in June 2025 and award decisions on October 19, 2025 [3][46]

Group 2: Challenge Tasks
- The challenge includes five real-world tasks covering different aspects of robotic perception: language-driven autonomous driving, social navigation, sensor placement optimization, cross-modal drone navigation, and cross-platform 3D object detection [6][9]
- Each task is designed to test the robustness and adaptability of robotic systems under different conditions, emphasizing the need for innovative solutions in perception and understanding [44]

Group 3: Technical Features
- The tasks require end-to-end multimodal models that integrate visual sequences with natural-language instructions, aiming for deep coupling between language, perception, and planning [7]
- The challenge emphasizes robust performance in dynamic environments, including the ability to handle sensor-placement variation and social interaction with humans [20][28]

Group 4: Evaluation Metrics
- The evaluation framework spans multiple dimensions, including perception accuracy, understanding via visual question answering (VQA), trajectory prediction, and planning consistency with language commands (an illustrative metric sketch follows this summary) [9][22]
- Baseline models and their performance metrics are provided for each task, indicating the expected computational resources and training requirements [13][19][39]

Group 5: Awards and Incentives
- The challenge offers a total prize pool exceeding $10,000, with awards for first, second, and third places, plus innovation awards for outstanding contributions in each task [40][41]
- All teams that make valid submissions receive official participation certificates, encouraging broad engagement in the competition [41]
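As a concrete example of the trajectory-prediction dimension referenced in the evaluation metrics above, the sketch below computes average and final displacement error (ADE/FDE), two metrics commonly used for this kind of task. The challenge's actual scoring formulas are not specified here, so treat this as an illustrative assumption rather than the official evaluation code.

```python
import numpy as np

def ade_fde(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Average and final displacement error between a predicted trajectory
    and ground truth, both shaped (T, 2) as (x, y) waypoints in metres."""
    errors = np.linalg.norm(pred - gt, axis=1)   # per-step Euclidean error
    return float(errors.mean()), float(errors[-1])

# Toy usage: a prediction that drifts 0.1 m per step off a straight-line path.
t = np.arange(5, dtype=float)
gt = np.stack([t, np.zeros_like(t)], axis=1)
pred = gt + np.stack([np.zeros_like(t), 0.1 * t], axis=1)
ade, fde = ade_fde(pred, gt)
print(f"ADE={ade:.3f} m, FDE={fde:.3f} m")   # ADE=0.200 m, FDE=0.400 m
```

Lower is better for both numbers: ADE summarizes error over the whole horizon, while FDE isolates the endpoint, which is often what matters most for downstream planning.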