Pure-Vision Robot Manipulation
Jinqiu Capital Portfolio Company Digua Robot Proposes VO-DP, a Pure-Vision Robot Manipulation Method | Jinqiu Spotlight
Jinqiu Spotlight · 2025-09-22 07:15
Core Insights
- Jinqiu Capital has completed its investment in Digua Robot, reflecting its focus on long-term investment in AI startups with breakthrough technologies and innovative business models [1][38]
- Digua Robot is a leading provider of general-purpose robot hardware and software, with a comprehensive product system covering chips, algorithms, and software that serves a wide range of robotic applications [2]

Investment Overview
- Jinqiu Capital, an AI-focused fund, emphasizes a long-term investment philosophy and seeks to support startups with transformative technologies [1]
- The investment in Digua Robot aligns with this philosophy, as the company has developed a robust platform for intelligent robotics [1]

Company Profile
- Digua Robot, established in 2015, has built a complete product ecosystem from chips to software, enabling diverse robotic applications [2]
- The company has shipped over 5 million units of its Xuri intelligent computing chips and works with over 200 small and medium-sized enterprises and more than 200 leading universities worldwide [2]

Technological Advancements
- Digua Robot has introduced VO-DP, a pure-vision robot manipulation method that improves operational precision and demonstrates the potential of purely visual perception in robotics [2][4]
- VO-DP fuses semantic and geometric features, significantly improving performance on robotic manipulation tasks [6][16]

Research and Development
- End-to-end robot manipulation learning is a central problem in embodied intelligence; current mainstream approaches divide into Vision-Action (VA) models and Vision-Language-Action (VLA) models [4][7]
- The research highlights the limitations of VLA models and emphasizes the foundational importance of VA modeling for understanding action prediction [8]

Experimental Results
- In simulation, VO-DP achieves accuracy competitive with traditional 3D-input methods [5][30]
- Experiments indicate that integrating DINOv2 and VGGT-AA features significantly improves the success rate of robotic manipulation tasks [22][25]

Future Directions
- The company plans to extend the work to multi-view representations and dynamic prediction, aiming to improve robustness and task success rates in complex environments [39]
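The core idea reported above, combining semantic features (from a DINOv2-style encoder) with geometric features (from a VGGT-style encoder) into one observation embedding that conditions an action-prediction policy, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature dimensions, the concatenate-then-project fusion, and the mean-pooling step are all assumptions, with random arrays standing in for real encoder outputs.

```python
import numpy as np

# Hypothetical dimensions: a 16x16 patch grid from one RGB view, with
# assumed feature widths for the semantic and geometric streams.
N_PATCHES = 256
D_SEM, D_GEO = 384, 128

rng = np.random.default_rng(0)
semantic = rng.standard_normal((N_PATCHES, D_SEM))   # stand-in for DINOv2 patch tokens
geometric = rng.standard_normal((N_PATCHES, D_GEO))  # stand-in for VGGT patch tokens

# Per-patch fusion: concatenate both streams, then apply a projection
# (a matrix that would be learned in a real model) with a ReLU.
fused = np.concatenate([semantic, geometric], axis=-1)   # shape (256, 512)
W = rng.standard_normal((D_SEM + D_GEO, 256)) * 0.02
projected = np.maximum(fused @ W, 0.0)                   # shape (256, 256)

# Global average pooling gives one observation embedding that could
# condition a policy head (e.g. the denoiser of a diffusion policy).
obs_embedding = projected.mean(axis=0)                   # shape (256,)

print(fused.shape, obs_embedding.shape)
```

The design choice illustrated here is why fusion helps: the semantic stream carries object identity while the geometric stream carries spatial structure, so a policy conditioned on their combination sees both "what" and "where" from RGB input alone, without an explicit 3D sensor.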