Neural Rendering Technology
HK Stock Movers | Five One Vision (06651) Rises Over 8% Intraday as It Becomes NVIDIA's Global L4 Autonomous Driving Simulation Partner
Zhitong Finance · 2026-03-17 03:26
Core Viewpoint
- Five One Vision (06651) rose more than 8% intraday and was last up 5.43% at HKD 53.35, on turnover of HKD 61.5 million [1]

Group 1: Company Developments
- NVIDIA announced at GTC on March 16 that its Omniverse NuRec has been deeply integrated with Five One Vision's SimOne, using neural rendering technology to address the non-interactive nature of real-world data collection in autonomous driving [1]
- The collaboration aims to accelerate reasoning-based autonomous driving, represented by VLA and world models, and to empower global Level 4 (L4) automotive partners [1]
- Five One Vision holds a 53.5% share of China's high-end simulation market, and the partnership is expected to further consolidate its core position in global Physical AI [1]
NVIDIA Releases New Neural Rendering Technology DLSS 5
Sina Tech · 2026-03-16 18:47
Core Viewpoint
- NVIDIA has launched DLSS 5, which it positions as its most significant breakthrough in computer graphics since the introduction of real-time ray tracing in 2018 [1]

Group 1: Product Features
- DLSS 5 introduces a real-time neural rendering model that enhances pixel lighting and material effects, narrowing the gap between rendered images and reality [1]
- The technology lets game developers achieve a level of realism previously attainable only in Hollywood visual effects [1]

Group 2: Company Statements
- NVIDIA CEO Jensen Huang said DLSS 5 marks a revolutionary moment in graphics technology, combining handcrafted rendering with generative AI to substantially improve visual realism while preserving artistic control for creators [1]

Group 3: Release Information
- DLSS 5 is scheduled for release in the fall of this year [1]
Major OmniRe Upgrade! Dual SOTA in Color and Geometric Rendering for Autonomous Driving Scene Reconstruction
自动驾驶之心· 2025-07-27 14:41
Core Insights
- The article presents a multi-scale bilateral grid framework that improves both the geometric accuracy and the visual realism of dynamic scene reconstruction for autonomous driving, addressing the photometric inconsistency of real-world captures [5][10][12]

Motivation
- Neural rendering is central to developing and testing autonomous driving systems, but it relies heavily on photometric consistency across multi-view images. Variations in lighting, weather, and camera parameters introduce significant color inconsistencies, leading to erroneous geometry and distorted textures [5][6]

Existing Solutions
- Prior approaches fall into two main categories: global appearance codes and bilateral grids. The proposed framework combines the strengths of both to overcome their individual limitations [6][10]

Key Contributions
- A multi-scale bilateral grid that seamlessly integrates global appearance coding with local bilateral grids, enabling adaptive color correction from coarse to fine scales. This significantly improves the geometric accuracy of dynamic driving-scene reconstruction and effectively suppresses artifacts such as "floaters" [9][10][12]

Method Overview
1. **Scene Representation and Initial Rendering**: The framework employs Gaussian splatting to model complex driving scenes, building a hybrid scene graph whose elements, such as sky, static background, and dynamic objects, are modeled independently [12]
2. **Multi-Scale Bilateral Grid Correction**: The initial rendering is processed through a hierarchical multi-scale bilateral grid, yielding a color-consistent, visually realistic high-quality image [13][14]
3. **Optimization Strategy and Real-World Adaptability**: A coarse-to-fine optimization schedule and a composite loss function keep training stable and adapt the model to real-world variation in image-signal-processing parameters [15][16]

Experimental Results
- The framework was extensively evaluated on four major autonomous driving datasets: Waymo, NuScenes, Argoverse, and PandaSet. Results show significant gains in both geometric accuracy and visual realism over baseline models [17][18]

Quantitative Evaluation
- The method leads on both geometric and appearance metrics; for example, Chamfer Distance (CD) on Waymo improves from 1.378 (baseline) to 0.989, demonstrating effective handling of color inconsistency [18][19]

Qualitative Evaluation
- Visual comparisons show robustness in complex real-world scenarios, with fewer visual artifacts and consistently high-quality outputs [23][24][29]

Generalizability and Plug-and-Play Capability
- Integrating the core modules into advanced baselines such as ChatSim and StreetGS yields substantial gains, for instance an increase in reconstruction PSNR from 25.74 to 27.90 [20][21]

Conclusion
- The multi-scale bilateral grid framework represents a significant advance for autonomous driving, providing a robust solution to dynamic scene reconstruction under photometric inconsistency and thereby enhancing the safety and reliability of downstream autonomous systems [10][12][18]
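To make the bilateral-grid correction step concrete, here is a minimal sketch of how a bilateral grid of per-cell affine color transforms can be "sliced" at each pixel to correct a rendered image. This is an illustration only, not the paper's implementation: the `slice_bilateral_grid` helper is hypothetical, it uses nearest-cell lookup rather than trilinear slicing, and a fixed luminance guidance, whereas the actual framework learns grids at multiple scales jointly with the Gaussian scene representation.

```python
import numpy as np

def slice_bilateral_grid(image, grid):
    """Apply a bilateral grid of per-cell affine color transforms.

    image: (H, W, 3) floats in [0, 1] -- the initially rendered frame.
    grid:  (gh, gw, gd, 3, 4) -- one 3x4 affine color map per grid cell,
           indexed by spatial position (gh, gw) and guidance luminance (gd).
    Each pixel looks up its cell (nearest-cell here for brevity) and is
    mapped as  out = A[:, :3] @ rgb + A[:, 3].
    """
    H, W, _ = image.shape
    gh, gw, gd = grid.shape[:3]
    # Map pixel coordinates to spatial grid cells.
    ys = np.clip(np.arange(H) * gh // H, 0, gh - 1)
    xs = np.clip(np.arange(W) * gw // W, 0, gw - 1)
    # Luminance acts as the third (range) axis of the bilateral grid.
    luma = image @ np.array([0.299, 0.587, 0.114])
    zs = np.clip((luma * gd).astype(int), 0, gd - 1)
    out = np.empty_like(image)
    for i in range(H):
        for j in range(W):
            A = grid[ys[i], xs[j], zs[i, j]]   # (3, 4) affine map
            out[i, j] = A[:, :3] @ image[i, j] + A[:, 3]
    return out
```

In training, the grid entries would be learnable parameters initialized near the identity transform, so the correction starts as a no-op and gradually absorbs per-camera exposure and white-balance differences instead of letting them corrupt the reconstructed geometry.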