Enterprises Grow Bigger and Stronger on the "Twin Wings" of Policy and Capital; Gradient Cultivation Activates the Innovation Engine of Countless SMEs
Yang Shi Wang · 2025-11-28 06:36
Core Insights
- Guangdong's enterprises are crucial participants in market economic activity: the province is expected to have over 9 million enterprises by September 2025, with 41.46 million new economy enterprises accounting for 37.92% of the province's newly established enterprises [1][3]

Innovation Capacity
- Guangdong ranks first in regional innovation capacity in China for 2025, a position it has held for nine consecutive years, supported by a comprehensive innovation chain spanning basic research, technology breakthroughs, results transformation, technology finance, and talent support [3]

Development of SMEs
- Guangdong has cultivated 43,498 innovative SMEs and 2,617 specialized "little giant" enterprises, establishing a gradient cultivation system for high-quality SME development [5][17]

Talent Attraction
- The "Million Talents Gather in Nanyue" initiative, launched in February 2025, aims to attract and retain talent; Guangdong's skilled workforce has reached 22.01 million, including 8.27 million high-skilled workers [14]

Economic Transformation
- The 2024 opening of the Shenzhen-Jiangmen tunnel has accelerated Jiangmen's shift from traditional manufacturing toward emerging industries such as robotics and semiconductors, strengthening the pairing of Shenzhen's R&D with Jiangmen's production capacity [15]

Policy Support
- Guangdong's government has implemented a structured guiding-fund system to support innovation, creating a regional innovation network that amplifies the scale and radiation effects of entrepreneurship [17][18]
[Research Picks: Industry + Company] This Key Technology Is Becoming the Core of AI Data Center Cost Reduction! Domestic Manufacturers Face a New Opportunity
Yicai (第一财经) · 2025-09-18 12:59
Group 1
- The key technology is becoming central to cost reduction for AI data centers, with AIDC demand catalyzing an upturn in industry prosperity; related business at industry giants has grown by 300% [1]
- Domestic magnetic levitation compressor manufacturers stand at the threshold of a new opportunity [1]

Group 2
- The 3D vision sector is expanding rapidly at a 13.1% CAGR, and one company holds a 70% market share [1]
- That company's performance has reached an inflection point, and its PE ratio is expected to drop 200-fold within two years [1]
SpatialTrackerV2: An Open-Source, Feed-Forward, Scalable 3D Point Tracking Method
自动驾驶之心 (Heart of Autonomous Driving) · 2025-07-20 08:36
Core Viewpoint
- The article presents SpatialTrackerV2, a state-of-the-art method for 3D point tracking from monocular video that integrates video depth, camera ego-motion, and object motion into a fully differentiable process, enabling scalable joint training [7][37]

Group 1: Current Issues in 3D Point Tracking
- 3D point tracking aims to recover long-term 3D trajectories of arbitrary points from monocular video and shows strong potential in applications such as robotics and video generation [4]
- Existing solutions rely heavily on low- and mid-level vision models, incurring high computational cost, and their need for ground-truth 3D trajectories as supervision limits scalability [6][10]

Group 2: Proposed Solution - SpatialTrackerV2
- SpatialTrackerV2 decomposes 3D point tracking into three independent components (video depth, camera ego-motion, and object motion) and integrates them into a fully differentiable framework [7]
- The architecture includes a front end for video depth estimation and camera pose initialization and a back end for joint motion optimization, with a novel SyncFormer module modeling correlations between 2D and 3D features [7][30]

Group 3: Performance Evaluation
- The method achieved new state-of-the-art results on the TAPVid-3D benchmark, with scores of 21.2 AJ and 31.0 APD3D, improvements of 61.8% and 50.5% over the previous best [9]
- SpatialTrackerV2 also leads in video depth and camera pose consistency estimation, outperforming existing methods such as MegaSAM while running approximately 50x faster at inference [9]

Group 4: Training and Optimization Process
- Training exploits consistency constraints between static and dynamic points for 3D tracking, allowing effective optimization even with limited depth supervision [8][19]
- The model employs a bundle optimization approach to iteratively refine depth and camera pose estimates, combining multiple loss functions to ensure accuracy [24][26]

Group 5: Conclusion
- SpatialTrackerV2 represents a significant advance in 3D point tracking, providing a robust foundation for motion understanding in real-world scenarios and a step toward "physical intelligence" through the exploration of large-scale visual data [37]
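The three-factor decomposition described above (video depth, camera ego-motion, object motion) can be pictured as a chain of rigid transforms applied to a back-projected pixel. The sketch below is a minimal illustration under a pinhole camera assumption, with hypothetical function names; it is not the paper's actual code.

```python
import numpy as np

def pixel_to_camera(u, v, depth, K):
    """Back-project a pixel (u, v) with metric depth into camera coordinates."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return depth * ray  # 3D point in the camera frame

def camera_to_world(p_cam, R_wc, t_wc):
    """Apply the camera ego-motion (camera-to-world rigid transform)."""
    return R_wc @ p_cam + t_wc

def apply_object_motion(p_world, R_obj, t_obj):
    """Apply a per-object rigid motion on top of the static scene."""
    return R_obj @ p_world + t_obj

# Example: pixel at the principal point with depth 2 m, camera translated
# 1 m along x, object moving 0.5 m along z.
K = np.array([[100.0,   0.0, 50.0],
              [  0.0, 100.0, 50.0],
              [  0.0,   0.0,  1.0]])
p_cam = pixel_to_camera(50.0, 50.0, 2.0, K)                              # [0, 0, 2]
p_world = camera_to_world(p_cam, np.eye(3), np.array([1.0, 0.0, 0.0]))   # [1, 0, 2]
p_final = apply_object_motion(p_world, np.eye(3), np.array([0.0, 0.0, 0.5]))
print(p_final)  # [1, 0, 2.5]
```

Because each stage is a differentiable map, errors in the final 3D trajectory can in principle backpropagate to depth, pose, and object-motion estimates jointly, which is the structural point the summary makes.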
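The bundle-optimization idea in Group 4 can likewise be illustrated with a toy refinement loop: recover a camera translation by iteratively minimizing 2D reprojection error against observed tracks. This is a deliberately simplified, hypothetical stand-in (identity rotation, known 3D points, finite-difference gradients) for the paper's differentiable back-end.

```python
import numpy as np

K = np.array([[100.0,   0.0, 50.0],
              [  0.0, 100.0, 50.0],
              [  0.0,   0.0,  1.0]])

pts_w = np.array([[ 0.5,  0.2, 3.0],   # static 3D points in world coordinates
                  [-0.3,  0.1, 4.0],
                  [ 0.2, -0.4, 5.0]])
t_true = np.array([0.3, -0.1, 0.0])    # ground-truth camera translation

def project(pts, t):
    """Project world points into pixels for a camera at translation t."""
    cam = pts - t                      # world -> camera (rotation = identity)
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

obs = project(pts_w, t_true)           # "observed" 2D tracks

def loss(t):
    r = project(pts_w, t) - obs
    return float(np.sum(r ** 2))       # squared reprojection error

t, lr, eps = np.zeros(3), 2e-4, 1e-6
for _ in range(500):                   # gradient descent, numeric gradients
    g = np.array([(loss(t + eps * e) - loss(t - eps * e)) / (2 * eps)
                  for e in np.eye(3)])
    t = t - lr * g

print(t)  # converges toward t_true
```

The paper's back-end refines depth and pose jointly with analytic gradients and multiple loss terms; this toy fixes everything except one translation purely to show the iterative-refinement pattern.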