3D Gaussians

A single .md file has earned over 400 stars: this survey breaks 3D scene generation down into four major paradigms
机器之心· 2025-06-10 08:41
Core Insights

The article discusses recent advances in 3D scene generation, highlighting a comprehensive survey that categorizes existing methods into four main paradigms: procedural generation, neural network-based 3D representation generation, image-driven generation, and video-driven generation [2][4][7].

Summary by Sections

Overview of 3D Scene Generation
A survey titled "3D Scene Generation: A Survey" reviews over 300 representative papers and traces the field's rapid growth since 2021, driven by the rise of generative models and new 3D representations [2][4][5].

Four Main Paradigms
The four paradigms provide a clear technical roadmap for 3D scene generation, with performance compared across dimensions such as realism, diversity, viewpoint consistency, semantic consistency, efficiency, controllability, and physical realism [7].

Procedural Generation
Procedural methods automatically construct complex 3D environments from predefined rules and constraints, and are widely used in games and graphics engines. The survey further divides this category into neural network-based generation, rule-based generation, constraint optimization, and large language model-assisted generation [8]. (A minimal rule-plus-constraint sketch appears at the end of this summary.)

Image-based and Video-based Generation
Image-based generation leverages 2D image models to reconstruct 3D structure from one or more views, while video-based generation treats a 3D scene as a sequence of images, coupling spatial modeling with temporal consistency [9]. (A depth-unprojection sketch illustrating the image-based route follows the procedural example below.)

Challenges in 3D Scene Generation
Despite significant progress, controllable, high-fidelity, and physically realistic 3D modeling remains difficult. Key open issues include uneven generation capability across paradigms, the need for better 3D representations, limited high-quality data, and the lack of unified evaluation standards [10][16].

Future Directions
Future work should target higher-fidelity generation, finer parameter control, holistic scene generation, and the integration of physical constraints to ensure structural and semantic consistency. Supporting interactive scene generation and unifying perception with generation are likewise seen as crucial for the next generation of 3D modeling systems [12][18].
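To make the rule-and-constraint idea in the Procedural Generation section concrete, here is a minimal illustrative sketch (not taken from the survey or any particular engine): objects are placed on a ground plane by repeated sampling, and any candidate that violates a simple non-overlap constraint is rejected. The `PlacedObject` / `place_objects` names and the radius-based footprint constraint are assumptions made for illustration only.

```python
import random
from dataclasses import dataclass

# Hypothetical toy example of rule-plus-constraint procedural placement.

@dataclass
class PlacedObject:
    name: str
    x: float
    y: float
    radius: float  # footprint radius on the ground plane

def overlaps(a: PlacedObject, b: PlacedObject) -> bool:
    # Constraint: the circular footprints of two objects must not intersect.
    dx, dy = a.x - b.x, a.y - b.y
    return (dx * dx + dy * dy) ** 0.5 < a.radius + b.radius

def place_objects(rules, extent=10.0, max_tries=200, seed=0):
    """Rule-based placement with rejection sampling.

    `rules` is a list of (name, count, radius) tuples acting as a tiny
    scene grammar; positions are resampled until the non-overlap
    constraint holds or the try budget runs out.
    """
    rng = random.Random(seed)
    scene = []
    for name, count, radius in rules:
        for _ in range(count):
            for _ in range(max_tries):
                cand = PlacedObject(name, rng.uniform(0, extent),
                                    rng.uniform(0, extent), radius)
                if all(not overlaps(cand, other) for other in scene):
                    scene.append(cand)
                    break
    return scene

if __name__ == "__main__":
    layout = place_objects([("tree", 12, 0.8), ("rock", 6, 0.5), ("house", 2, 2.0)])
    for obj in layout:
        print(f"{obj.name:5s} at ({obj.x:5.2f}, {obj.y:5.2f})")
```

Real procedural systems layer many such rules (terrain, roads, vegetation density) and solve the constraints jointly; the sketch only shows the basic sample-and-reject pattern.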
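For the image-based route, one common primitive is lifting a single RGB-D view into a 3D point cloud by unprojecting pixels through the camera intrinsics; downstream methods then inpaint occluded regions or fuse additional views. The sketch below assumes a pinhole camera and uses a synthetic depth map in place of a monocular depth prediction; the function name is illustrative, not the survey's.

```python
import numpy as np

def unproject_depth(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Lift a depth map of shape (H, W) into an (H*W, 3) point cloud.

    Standard pinhole unprojection:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

if __name__ == "__main__":
    # Synthetic depth map standing in for a predicted depth image.
    depth = np.full((480, 640), 2.0, dtype=np.float32)
    points = unproject_depth(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
    print(points.shape)  # (307200, 3)
```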