3D Reconstruction
OpenClaw ignites the AI memory trend; DeepMind's hybrid memory pushes 3D reconstruction to nearly 20,000 frames
机器之心· 2026-03-15 01:20
Core Insights
- The article discusses the rapid rise of the personal assistant OpenClaw, whose long-term memory lets it remember user interactions and preferences [1]
- OpenClaw's memory mechanism is crucial for handling complex tasks across applications, from chat dialogue to 3D reconstruction [1]

Group 1: Memory Mechanism and 3D Reconstruction
- A memory mechanism is essential for maintaining long-term context in tasks such as chat dialogue and automated workflows [1]
- Existing feedforward 3D reconstruction models struggle on long sequences because they rely on short-term context windows, limiting their ability to model long-range dependencies [2]
- Geometric foundation models such as DUSt3R and MonST3R enable robust feedforward inference even in challenging scenarios [1][2]

Group 2: Challenges and Innovations
- Two main barriers exist: inherent context barriers in current architectures and significant data barriers during training [2]
- Google DeepMind and UC Berkeley proposed LoGeR (Long-Context Geometric Reconstruction) to address these challenges, enabling dense 3D reconstruction over long sequences without post-hoc optimization [2][4]
- LoGeR uses a hybrid memory module to maintain global consistency and high precision across block boundaries (see the sketch below) [2][4]

Group 3: Performance and Evaluation
- LoGeR was trained on sequences of 128 frames yet generalizes to thousands of frames, outperforming previous feedforward methods and reducing absolute trajectory error (ATE) by over 74% on the KITTI dataset [4]
- In quantitative results, LoGeR surpassed existing feedforward methods and even outperformed the strongest optimization-based method, VGGT-Long, by 32.5% [24]
- LoGeR remained stable in both long- and short-sequence evaluations, maintaining global scale consistency on sequences of up to 20,000 frames [25][30]
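The hybrid memory idea lends itself to a compact illustration. Below is a minimal sketch in PyTorch, assuming a short-term sliding window of recent frame tokens plus a fixed budget of compressed long-term slots that absorb evicted frames through cross-attention; every module name, size, and the compression scheme are assumptions for illustration, not LoGeR's published design.

```python
import torch
import torch.nn as nn

class HybridMemory(nn.Module):
    def __init__(self, dim=256, window=8, mem_slots=64, heads=8):
        super().__init__()
        self.window = window                    # short-term: last `window` frames kept verbatim
        self.register_buffer("long_term", torch.zeros(1, mem_slots, dim))
        self.compress = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.read = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.short_term = []                    # list of (B, N, dim) frame tokens

    def write(self, frame_tokens):
        """Push one frame; overflow is folded into the compressed long-term slots."""
        self.short_term.append(frame_tokens)
        if len(self.short_term) > self.window:
            evicted = self.short_term.pop(0)
            mem = self.long_term.expand(evicted.size(0), -1, -1)
            updated, _ = self.compress(mem, evicted, evicted)  # slots attend to the evicted frame
            self.long_term = updated.mean(0, keepdim=True).detach()

    def read_context(self, query_tokens):
        """The current frame queries both memories for globally consistent context."""
        context = torch.cat(
            self.short_term + [self.long_term.expand(query_tokens.size(0), -1, -1)], dim=1)
        out, _ = self.read(query_tokens, context, context)
        return out

mem = HybridMemory()
for t in range(32):                        # toy stream of 32 frames
    tokens = torch.randn(1, 196, 256)      # e.g. 14x14 patch tokens per frame
    fused = mem.read_context(tokens)       # fuse current frame with memory
    mem.write(tokens)
print(fused.shape)                         # torch.Size([1, 196, 256])
```

The point of the split is cost: the short-term window grows with the window size only, and the long-term memory stays a fixed size no matter how many frames stream past, which is what makes thousand-frame sequences tractable for a feedforward model.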
Exclusive interview with Remy founder Wang Zhengnan: how a hit app that surpassed WeChat in downloads was born
虎嗅APP· 2026-02-14 09:18
Core Insights
- The article discusses the rapid growth of KIRI's app Remy, which gained over 1 million users within 9 days of launch, highlighting growing consumer demand for 3D reconstruction technology [6][12]
- KIRI's transition from hardware to software development is emphasized, showcasing the company's resilience and adaptability after earlier failed hardware ventures [15][16]
- The introduction of 3D Gaussian Splatting significantly lowered the barrier to 3D modeling, tripling KIRI Engine's user conversion rate and signaling a shift in user needs toward casual 3D content creation [10][28]

Group 1: User Growth and Market Demand
- Remy processed tasks at a rate of 200 every 10 minutes, keeping roughly 800 GPUs continuously busy with 3D reconstruction work [5]
- The app's user base surpassed 1 million within 9 days; for comparison, Sora 2 reached the same milestone in 5 days [6]
- A viral marketing strategy, including a 3D video that garnered over 1 billion views on Douyin, fueled the app's explosive growth [7]

Group 2: Technology and Product Development
- 3D Gaussian Splatting, while not suited to professional applications, has proven effective for casual users, driving up engagement and usage [10][28]
- KIRI Engine's focus on professional users in 3D printing, gaming, and film differentiates it from competitors targeting casual users [20][21]
- Keeping KIRI Engine professional while launching Remy as a consumer product reflects a deliberate market-segmentation strategy [32]

Group 3: Business Model and Future Goals
- KIRI targets a 30% 30-day user retention rate and 500,000 daily active users by 2026, aligning its growth strategy with industry benchmarks [51][56]
- The company plans to monetize Remy through advertising, professional 3D reconstruction services, and custom special-effect videos [52]
- A partnership with a domestic startup for elastic computing has let KIRI absorb Remy's heavy computational demands without prohibitive costs [43]
Segmenting everything isn't enough; everything must be reconstructed in 3D too. SAM 3D is here
具身智能之心· 2025-11-21 00:04
Core Viewpoint
- Meta has shipped major updates with SAM 3D and SAM 3, deepening 3D understanding of images and adding advanced detection, segmentation, and tracking for images and video [2][6][40]

Group 1: SAM 3D Overview
- SAM 3D is the latest addition to the SAM series, comprising two models, SAM 3D Objects and SAM 3D Body, both delivering state-of-the-art performance at converting 2D images into detailed 3D reconstructions [2][4]
- SAM 3D Objects generates 3D models from a single image, overcoming the limits of traditional 3D modeling pipelines that often rely on isolated or synthetic data [11][15]
- Meta annotated nearly 1 million real-world images, producing approximately 3.14 million 3D meshes via a scalable data engine that raises both the quality and quantity of 3D data [20][26]

Group 2: SAM 3D Body
- SAM 3D Body focuses on accurate 3D human pose and shape reconstruction from single images, holding up even in complex scenes with occlusions and unusual poses [28][30]
- The model is interactive, letting users guide and constrain predictions to improve accuracy and usability [29]
- A high-quality training dataset of around 8 million images was built to lift performance across various 3D benchmarks [33]

Group 3: SAM 3 Capabilities
- SAM 3 introduces promptable concept segmentation: given a text or example-image prompt, the model detects and segments every instance of the named concept, substantially improving concept recognition (see the sketch below) [40][42]
- SAM 3's architecture builds on previous advances, combining components such as the Meta Perception Encoder and DETR for stronger image recognition and object detection [42][44]
- SAM 3 achieves a twofold increase in cgF1 scores for concept recognition and stays near real-time for images with over 100 detection targets, completing inference in roughly 30 milliseconds on an H200 GPU [44]
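To make "promptable concept segmentation" concrete, here is a minimal sketch (not Meta's API): a concept prompt is embedded once, every candidate mask gets an embedding, and all instances whose similarity clears a threshold are returned, rather than a single best match. The encoders, names, and threshold below are stand-in assumptions.

```python
import torch
import torch.nn.functional as F

def concept_segment(mask_embeds, text_embed, thresh=0.3):
    """
    mask_embeds: (N, D) one embedding per candidate instance mask
    text_embed:  (D,)   embedding of the concept prompt, e.g. "striped cat"
    Returns indices of every mask whose cosine similarity to the prompt
    clears `thresh` -- all instances of the concept, not just the best one.
    """
    sims = F.cosine_similarity(mask_embeds, text_embed.unsqueeze(0), dim=-1)
    return (sims > thresh).nonzero(as_tuple=True)[0], sims

mask_embeds = F.normalize(torch.randn(100, 512), dim=-1)   # 100 mask candidates
text_embed = F.normalize(torch.randn(512), dim=-1)         # encoded text prompt
keep, sims = concept_segment(mask_embeds, text_embed, thresh=0.1)
print(f"{len(keep)} of {len(mask_embeds)} candidate masks matched the prompt")
```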
Saining Xie praises ByteDance Seed's new research! A single Transformer handles 3D reconstruction from arbitrary views
量子位· 2025-11-18 05:02
Core Insights
- The article covers the latest research from ByteDance's Seed team, Depth Anything 3 (DA3), which has drawn high praise from experts such as Saining Xie [1]
- DA3 simplifies 3D reconstruction by using a single vision transformer to accurately estimate depth and recover camera poses from a range of input formats, including single images, multi-view photos, and videos [2][7]

Performance Improvements
- DA3 posts significant gains: camera localization accuracy improves by an average of 35.7% and geometric reconstruction accuracy by 23.6% over previous models [3]
- The model also surpasses its predecessor, DA2, on monocular depth estimation [3]

Architectural Design
- DA3's architecture is deliberately simple yet effective: a single vision transformer focused on two core predictions, depth and light rays [7]
- The workflow has four main stages, beginning with input processing, where multi-view images are turned into feature tokens and camera parameters are folded in when available [9]
- At the core is a single plain transformer (vanilla DINO backbone) that alternates within-view and cross-view self-attention, letting the same network handle any mix of input views (see the sketch below) [9]

Training Methodology
- DA3 uses a teacher-student distillation strategy: a stronger teacher model generates high-quality pseudo-labels from vast datasets to guide the student model (DA3) during training [13]
- This lets the model exploit diverse data while reducing reliance on high-precision annotations, broadening the scenarios covered during training [14]

Evaluation and Applications
- DA3 is robust in practice, accurately estimating per-frame camera parameters in video and reconstructing camera motion trajectories [16]
- Combining DA3's depth maps with the recovered camera poses yields denser, lower-noise 3D point clouds, a marked quality gain over traditional methods [17]
- The model can also synthesize images from unshot angles through view completion, pointing to applications in virtual tourism and digital twins [19]

Team Background
- Depth Anything 3 is led by Bingyi Kang, a post-95 researcher at ByteDance focused on computer vision and multimodal models [20]
- Kang completed his undergraduate studies at Zhejiang University in 2016 and pursued a master's and PhD in artificial intelligence at UC Berkeley and the National University of Singapore [23]
- He previously interned at Facebook AI Research and has collaborated with notable figures in the field [24]
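A minimal sketch of the alternating attention pattern described above: within-view self-attention treats each view independently, while cross-view self-attention flattens all views into one sequence so tokens can mix across viewpoints. Shapes and layer choices are illustrative assumptions, not Depth Anything 3's actual configuration.

```python
import torch
import torch.nn as nn

class AlternatingBlock(nn.Module):
    def __init__(self, dim=384, heads=6):
        super().__init__()
        self.within = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.cross = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, x):            # x: (B, V, N, D) = batch, views, tokens, dim
        B, V, N, D = x.shape
        x = self.within(x.reshape(B * V, N, D))   # attend inside each view separately
        x = x.reshape(B, V * N, D)                # merge all views into one sequence
        x = self.cross(x)                         # attend across every view at once
        return x.reshape(B, V, N, D)

block = AlternatingBlock()
views = torch.randn(2, 4, 196, 384)   # 4 input views, 14x14 patches each
print(block(views).shape)             # torch.Size([2, 4, 196, 384])
```

Because nothing in the block depends on V, the same weights serve a single image (V=1), a multi-view set, or a video clip, which matches the any-input-format claim above.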
The first 3D reconstruction model with instance understanding: NTU and StepFun propose an instance-decoupling-based 3D reconstruction model to aid scene understanding
36Ke· 2025-10-31 08:28
Core Insights
- The article discusses the challenge AI faces in perceiving 3D geometry and semantic content together, and the limits of traditional pipelines that separate 3D reconstruction from spatial understanding. A new approach, IGGT (Instance-Grounded Geometry Transformer), unifies the two in a single model and improves performance across a range of tasks [1]

Group 1: IGGT Model Development
- IGGT is an end-to-end unified large Transformer that combines spatial reconstruction and instance-level contextual understanding in one model [1]
- The model is built on a new large-scale dataset, InsScene-15K, comprising 15,000 scenes and 200 million images with high-quality, 3D-consistent instance-level masks [2]
- IGGT introduces the "Instance-Grounded Scene Understanding" paradigm, operating independently of any specific vision-language model (VLM) and enabling seamless integration with various VLMs and large multimodal models (LMMs) [3]

Group 2: Applications and Capabilities
- IGGT's unified representation greatly expands downstream capability, supporting spatial tracking, open-vocabulary segmentation, and scene question answering (QA) [4]
- The architecture pairs a Geometry Head, which predicts camera parameters and depth maps, with an Instance Head that decodes instance features, strengthening spatial perception (see the sketch below) [11][18]
- IGGT achieves strong instance 3D tracking, with tracking IoU and success rates of 70% and 90% respectively, and is the only model that successfully re-tracks objects that disappear and reappear [16]

Group 3: Data Collection and Processing
- InsScene-15K is built through a novel data curation process that integrates three sources: synthetic data, real-world video capture, and RGBD capture [6][9][10]
- Synthetic data is generated in simulated environments, giving perfectly accurate segmentation masks, while real-world data passes through a custom pipeline that enforces temporal consistency [8][9]

Group 4: Performance Comparison
- IGGT outperforms existing models on reconstruction, understanding, and tracking, with especially large gains on understanding and tracking metrics [16]
- The model's instance masks can serve as prompts for VLMs, enabling open-vocabulary semantic segmentation and complex object-centric question answering [19][24]
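A minimal sketch of the dual-head layout described above: one shared token representation feeds a geometry head (depth plus camera parameters) and an instance head (per-token embeddings from which instance masks can be grouped). All layer sizes and the pooling scheme are placeholder assumptions, not IGGT's published architecture.

```python
import torch
import torch.nn as nn

class DualHeadDecoder(nn.Module):
    def __init__(self, dim=256, patch=14, grid=16):
        super().__init__()
        self.grid = grid
        # Geometry head: per-token depth patches plus pooled camera parameters
        # (here a toy 7-dim vector, e.g. pose + focal length).
        self.depth = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                   nn.Linear(dim, patch * patch))
        self.camera = nn.Linear(dim, 7)
        # Instance head: an embedding per token; masks come from grouping or
        # contrasting these features across views.
        self.instance = nn.Linear(dim, 64)

    def forward(self, tokens):            # tokens: (B, grid*grid, dim)
        B = tokens.size(0)
        depth = self.depth(tokens).reshape(B, self.grid, self.grid, -1)
        cam = self.camera(tokens.mean(dim=1))   # pooled tokens -> camera params
        inst = self.instance(tokens)            # (B, grid*grid, 64)
        return depth, cam, inst

dec = DualHeadDecoder()
depth, cam, inst = dec(torch.randn(2, 256, 256))
print(depth.shape, cam.shape, inst.shape)
```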
The first 3D reconstruction model with instance understanding! NTU and StepFun propose an instance-decoupling-based 3D reconstruction model to aid scene understanding
量子位· 2025-10-31 04:09
Core Insights
- The article discusses the challenge AI faces in simultaneously understanding the geometric structure and semantic content of 3D worlds, something humans perceive naturally. Traditional methods separate 3D reconstruction from spatial understanding, leading to compounding errors and limited generalization. IGGT (Instance-Grounded Geometry Transformer) is introduced to unify these processes in a single model [1][2]

Group 1: IGGT Framework
- IGGT is an end-to-end unified framework integrating spatial reconstruction and instance-level contextual understanding within a single model [2]
- A new large-scale dataset, InsScene-15K, was created, containing 15,000 scenes and 200 million images with high-quality, 3D-consistent instance-level masks [2][5]
- The model introduces the "Instance-Grounded Scene Understanding" paradigm, generating instance masks that integrate seamlessly with various vision-language models (VLMs) and large multimodal models (LMMs) [2][18]

Group 2: Data Collection Process
- The InsScene-15K dataset is built through a novel, SAM2-driven data curation process that merges three different data sources [5]
- Synthetic data is generated in simulated environments, yielding perfectly accurate RGB images, depth maps, camera poses, and object-level segmentation masks [8]
- Real-world video collection uses a custom SAM2 pipeline that generates dense initial mask proposals and propagates them over time, ensuring high temporal consistency [9]
- Real-world RGBD collection applies a mask optimization step that improves 2D mask quality while preserving 3D ID consistency [10]

Group 3: Model Architecture
- The IGGT architecture is a unified transformer that passes image tokens through attention modules to build a powerful shared token representation [14]
- It has dual decoding heads for geometry and instance prediction, with a cross-modal fusion block that sharpens spatial perception [17]
- A multi-view contrastive loss teaches the model 3D-consistent instance features from 2D inputs (see the sketch below) [15]

Group 4: Performance and Applications
- IGGT is the first model to perform reconstruction, understanding, and tracking simultaneously, with significant gains on understanding and tracking metrics [18]
- On instance 3D tracking, IGGT reaches a tracking IoU of 70% and a success rate of 90%, and is the only model able to track objects that disappear and reappear [19]
- The model supports multiple applications, including instance spatial tracking, open-vocabulary semantic segmentation, and scene-grounded QA, enabling complex object-centric queries in 3D scenes [23][30]
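The multi-view contrastive loss mentioned above can be sketched compactly: features sampled from different views that share an instance ID are pulled together, all other pairs pushed apart. The temperature, sampling, and masking details below are assumptions in the spirit of supervised contrastive learning, not IGGT's exact loss.

```python
import torch
import torch.nn.functional as F

def multiview_instance_contrastive(feats, ids, tau=0.07):
    """
    feats: (P, D) instance features sampled across several views
    ids:   (P,)   3D-consistent instance IDs for those samples
    """
    feats = F.normalize(feats, dim=-1)
    logits = feats @ feats.t() / tau                      # (P, P) pairwise similarity
    eye = torch.eye(len(ids), dtype=torch.bool)
    logits = logits.masked_fill(eye, -1e9)                # exclude self-pairs
    pos = (ids.unsqueeze(0) == ids.unsqueeze(1)).float().masked_fill(eye, 0.0)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    denom = pos.sum(1).clamp(min=1)                       # guard rows with no positives
    return -(pos * log_prob).sum(1).div(denom).mean()

feats = torch.randn(64, 32, requires_grad=True)   # 64 sampled pixels, 32-d features
ids = torch.randint(0, 8, (64,))                  # 8 instances spread across views
loss = multiview_instance_contrastive(feats, ids)
loss.backward()
print(float(loss))
```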
Tencent open-sources Hunyuan World Model 1.1: video becomes a 3D world in seconds, with single-GPU inference in just 1 second
量子位· 2025-10-22 09:12
Core Viewpoint
- Tencent has released and open-sourced Hunyuan World Model 1.1, a unified end-to-end 3D reconstruction model that generates 3D worlds from multiple views or videos with high precision and efficiency [1][3][16]

Group 1: Model Features
- Hunyuan World Model 1.1 is billed as the industry's first unified feedforward 3D reconstruction model, handling various input modalities and producing multiple outputs simultaneously with state-of-the-art (SOTA) performance [4][18][21]
- The model supports flexible input handling, optionally folding in camera poses, intrinsic parameters, and depth maps to improve reconstruction quality (see the sketch below) [18][20]
- It deploys on a single GPU with one-second inference, far faster than traditional pipelines that can take minutes or hours [22][24]

Group 2: Performance Comparison
- Against Meta's MapAnything and AnySplat models, Hunyuan World Model 1.1 showed superior surface smoothness and scene regularity in 3D point-cloud reconstruction tasks [11][12][14]
- The model leads on both geometric accuracy and detail recovery, producing more stable and realistic scene reconstructions than its competitors [14][15]

Group 3: User Accessibility
- The model is fully open-sourced: developers can clone it from GitHub and deploy locally, while ordinary users can generate 3D scenes online from uploaded images or videos [34][37]
- The technology aims to democratize 3D reconstruction, letting anyone create professional-grade 3D scenes in seconds [37]
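A minimal sketch of the flexible-input idea: each optional prior (pose, intrinsics, depth) is embedded when present, and a learned placeholder token stands in when absent, so one network serves every input combination. Dimensions and embedding choices are assumptions, not the Hunyuan model's actual interface.

```python
import torch
import torch.nn as nn

class OptionalPriorEmbedder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.pose_proj = nn.Linear(12, dim)    # 3x4 extrinsics, flattened
        self.intr_proj = nn.Linear(4, dim)     # fx, fy, cx, cy
        self.depth_proj = nn.Linear(1, dim)    # per-frame mean depth as a toy prior
        # Learned "missing" tokens used whenever a prior is not provided.
        self.missing = nn.ParameterDict(
            {k: nn.Parameter(torch.zeros(dim)) for k in ("pose", "intr", "depth")})

    def forward(self, pose=None, intr=None, depth=None):
        B = next(t.size(0) for t in (pose, intr, depth) if t is not None)
        embed = lambda t, proj, key: (proj(t) if t is not None
                                      else self.missing[key].expand(B, -1))
        return torch.stack([embed(pose, self.pose_proj, "pose"),
                            embed(intr, self.intr_proj, "intr"),
                            embed(depth, self.depth_proj, "depth")], dim=1)

emb = OptionalPriorEmbedder()
tokens = emb(intr=torch.randn(2, 4))   # only intrinsics are available
print(tokens.shape)                    # torch.Size([2, 3, 256])
```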
HIT & Li Auto's PAGS: a new SOTA for closed-loop autonomous driving simulation!
自动驾驶之心· 2025-10-17 16:04
Core Viewpoint
- The article covers advances in 3D scene reconstruction for dynamic urban environments, introducing the PAGS method, which fixes inefficient resource allocation by prioritizing the semantic elements most critical to driving safety [1][22]

Research Background and Core Issues
- Dynamic large-scale urban 3D reconstruction is essential to autonomous driving systems, underpinning simulation testing and digital-twin applications [1]
- Existing methods hit a resource-allocation bottleneck: they fail to distinguish critical elements (e.g., pedestrians, vehicles) from non-critical ones (e.g., distant buildings) [1]
- The result is compute wasted on non-critical detail while the fidelity of critical objects suffers [1]

Core Method Design
- PAGS embeds task-aware semantic priorities into reconstruction and rendering, built from three main modules:
  1. A composite Gaussian scene representation [4]
  2. Semantic-guided pruning (see the sketch below) [5]
  3. A priority-driven rendering pipeline [6]

Experimental Validation and Results Analysis
- Experiments on the Waymo and KITTI datasets measured reconstruction fidelity and efficiency against mainstream methods [12]
- Quantitatively, PAGS reaches a PSNR of 34.63 at 353 FPS, significantly ahead of other methods on both fidelity and speed [17][22]
- The model is 530 MB and uses 6.1 GB of VRAM, making it practical for in-vehicle hardware [17]

Conclusion
- PAGS breaks the usual fidelity-efficiency trade-off in dynamic driving-scene 3D reconstruction through semantic-guided resource allocation and priority-driven rendering acceleration [22]
- The method concentrates compute on critical objects, raising rendering speed while preserving high fidelity [23]
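A minimal sketch of semantic-guided pruning in the spirit described above: each Gaussian carries a semantic class, classes critical to driving get high priority weights, and low-priority Gaussians with small contribution are pruned first. The weights and the contribution proxy are assumptions, not PAGS's exact criterion.

```python
import torch

# Hypothetical priority table: safety-critical classes keep full weight.
PRIORITY = {0: 1.0, 1: 1.0, 2: 0.3, 3: 0.1}  # 0=pedestrian, 1=vehicle, 2=road, 3=far building

def semantic_prune(opacity, scale, sem_class, keep_ratio=0.6):
    """
    opacity: (N,) Gaussian opacities; scale: (N,) mean Gaussian scales;
    sem_class: (N,) integer class per Gaussian. Keeps the top `keep_ratio`
    fraction ranked by priority-weighted contribution.
    """
    weight = torch.tensor([PRIORITY[int(c)] for c in sem_class])
    score = weight * opacity * scale          # crude contribution proxy
    k = int(keep_ratio * len(score))
    return score.topk(k).indices

opacity = torch.rand(10_000)
scale = torch.rand(10_000)
sem = torch.randint(0, 4, (10_000,))
keep = semantic_prune(opacity, scale, sem)
print(f"kept {len(keep)} of {len(opacity)} Gaussians")
```

The effect is that a distant-building Gaussian needs roughly ten times the raw contribution of a pedestrian Gaussian to survive pruning, which is how compute stays concentrated on safety-critical content.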
Autonomous driving foundation models should be capability-oriented, not confined to the methods themselves
自动驾驶之心· 2025-09-16 23:33
Core Insights
- The article discusses the transformative impact of foundation models on autonomous driving perception, a shift from task-specific deep learning models to versatile architectures trained on vast, diverse datasets [2][4]
- It introduces a classification framework organized around four core capabilities required for robust performance in dynamic driving environments: general knowledge, spatial understanding, multi-sensor robustness, and temporal reasoning [2][5]

Group 1: Introduction and Background
- Autonomous driving perception lets vehicles interpret their surroundings in real time, spanning key tasks such as object detection, semantic segmentation, and tracking [3]
- Traditional task-specific models scale poorly and generalize badly, particularly in "long-tail scenarios" where rare but critical events occur [3][4]

Group 2: Foundation Models
- Foundation models, trained with self-supervised or unsupervised strategies, leverage large-scale datasets to learn general representations transferable to many downstream tasks [4][5]
- These models offer clear advantages for autonomous driving: inherent generalization, efficient transfer learning, and reduced reliance on labeled datasets [4][5]

Group 3: Key Capabilities
- Four key dimensions guide the design of foundation models tailored for autonomous driving perception:
  1. General knowledge: adapting to a wide range of driving scenarios, including rare situations [5][6]
  2. Spatial understanding: deep comprehension of 3D spatial structures and relationships [5][6]
  3. Multi-sensor robustness: maintaining high performance under varying environmental conditions and sensor failures [5][6]
  4. Temporal reasoning: capturing temporal dependencies and predicting future states of the environment [6]

Group 4: Integration and Challenges
- The article outlines three mechanisms for integrating foundation models into autonomous driving stacks: feature-level distillation, pseudo-label supervision, and direct integration (see the sketch below) [37][40]
- It highlights deployment challenges, including effective domain adaptation, hallucination risks, and efficiency in real-time applications [58][61]

Group 5: Future Directions
- The article stresses advancing foundation-model research to improve safety and effectiveness in autonomous driving systems, addressing current limitations and exploring new methodologies [2][5][58]
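Of the three integration mechanisms, feature-level distillation is the most self-contained to sketch: a compact student backbone is trained to match a frozen foundation model's features on unlabeled driving frames. Both backbones and the cosine objective below are illustrative stand-ins, not any specific system from the survey.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in "foundation model" (frozen) and a small deployable student backbone.
teacher = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.GELU(),
                        nn.Conv2d(64, 256, 3, padding=1)).eval()
student = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.GELU(),
                        nn.Conv2d(32, 256, 3, padding=1))
opt = torch.optim.AdamW(student.parameters(), lr=1e-4)

frames = torch.randn(4, 3, 64, 64)        # unlabeled driving frames
with torch.no_grad():
    target = teacher(frames)              # teacher features, no labels needed
pred = student(frames)
loss = 1 - F.cosine_similarity(pred, target, dim=1).mean()  # align feature maps
loss.backward()
opt.step()
print(float(loss))
```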
ICCV 2025 | RobustSplat: Transient-resistant 3DGS reconstruction via decoupled densification and dynamics
机器之心· 2025-08-19 09:45
Core Viewpoint
- The article presents the RobustSplat method, which tackles 3D Gaussian Splatting's (3DGS) difficulty with dynamic objects via a delayed Gaussian growth strategy and a scale-cascade mask guidance method, reducing rendering artifacts caused by transient objects [2][21]

Research Motivation
- The motivation comes from the dual role of Gaussian densification in 3DGS: it enriches scene detail but also risks overfitting dynamic regions, producing artifacts and scene distortion. The goal is to balance static structure representation against suppression of dynamic interference [6][8]

Methodology
- Transient mask estimation: a mask MLP with two linear layers outputs pixel-wise transient masks that separate transient from static regions [9]
- Feature selection: DINOv2 features are chosen for their balance of semantic consistency, noise resistance, and computational efficiency, outperforming alternatives such as Stable Diffusion and SAM features [10]
- Supervision design: an image residual loss is combined with a feature cosine-similarity loss to optimize the mask MLP, sharpening dynamic-region recognition (see the sketch below) [12]
- Delayed Gaussian growth strategy: this core strategy postpones densification so static scene structure is optimized first, reducing the risk of static areas being misclassified as transient [13]
- Scale-cascade mask guidance: transient masks are first estimated from low-resolution features, then supervision moves to high resolution for more accurate mask prediction [14]

Experimental Results
- On the NeRF On-the-go and RobustNeRF datasets, RobustSplat outperforms baseline methods including 3DGS, SpotLessSplats, and WildGaussians across PSNR, SSIM, and LPIPS [16][21]

Summary
- RobustSplat's strategies effectively reduce rendering artifacts from transient objects, delivering superior performance in complex scenes with dynamic elements while preserving detail [19][21]
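A minimal sketch of the mask supervision described above: the transient mask is pushed toward pixels with large photometric residuals and low feature similarity between rendered and captured views. The weighting, normalization, and the way DINOv2-style features are obtained are assumptions, not RobustSplat's exact losses.

```python
import torch
import torch.nn.functional as F

def mask_supervision(mask, rendered, gt, feat_render, feat_gt, lam=0.5):
    """
    mask:        (B,1,H,W) predicted transient probability
    rendered/gt: (B,3,H,W) rendered vs. captured image
    feat_*:      (B,C,h,w) e.g. DINOv2-style features of both images
    """
    residual = (rendered - gt).abs().mean(1, keepdim=True)       # photometric error
    feat_sim = F.cosine_similarity(feat_render, feat_gt, dim=1)  # (B,h,w)
    feat_dis = 1 - F.interpolate(feat_sim.unsqueeze(1), size=mask.shape[-2:],
                                 mode="bilinear", align_corners=False)
    # Pixels that look wrong photometrically AND semantically become the target.
    target = ((residual / residual.max()) + lam * feat_dis).clamp(0, 1)
    return F.binary_cross_entropy(mask, target.detach())

mask = torch.rand(1, 1, 64, 64, requires_grad=True)
loss = mask_supervision(mask, torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64),
                        torch.randn(1, 384, 16, 16), torch.randn(1, 384, 16, 16))
loss.backward()
print(float(loss))
```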