Generative Models

Insta360's Latest Panorama Survey: Challenges, Methods, and the Future of Panoramic Vision
机器之心· 2025-10-04 03:38
The authors are from the Insta360 Research Institute and its partner universities. Insta360 is currently hiring interns and full-time algorithm engineers for frontier directions such as world models, multimodal large models, and generative models; students committed to cutting-edge AI research and deployment are welcome to apply. Resume submission email: research@insta360.com

In emerging applications such as virtual reality, autonomous driving, and embodied intelligence, panoramic vision is becoming an indispensable research direction. Unlike conventional perspective images (ordinary planar images, and the standard input for most CV tasks), a panoramic image captures the complete 360°×180° spherical field of view (surroundings, the sky overhead, and the ground underfoot), as if the entire space around the standing point were unfolded into one large photograph. Because the two differ fundamentally in geometric projection, spatial sampling, and boundary continuity, algorithms developed for perspective vision often fail when transferred directly to the panoramic setting. Drawing on 300+ papers covering 20+ representative tasks, this survey is the first to take the "perspective-panorama gap" as its main thread, systematically organizing three major gaps, two core technical routes, and an outlook on future directions, helping researchers pick a solution by task and giving engineering teams clear coordinates for deployment by scenario.

The left side shows the spherical imagery captured by a panoramic camera, which after projection becomes the familiar equirectangular projection (ERP) panoramic image. Compared with the perspective image below, although ...
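The ERP projection mentioned above maps the sphere's longitude and latitude linearly onto the image's x and y axes. A minimal sketch of that mapping, converting an ERP pixel to a unit viewing direction (the function name, y-up axis convention, and top-left pixel origin are my own assumptions, not from the survey):

```python
import math

def erp_pixel_to_direction(u, v, width, height):
    """Map an ERP pixel (u, v) to a unit direction on the viewing sphere.

    Longitude spans [-pi, pi) across the image width and latitude
    [pi/2, -pi/2] from top to bottom, covering the full 360°x180° view.
    """
    lon = (u / width) * 2.0 * math.pi - math.pi   # -pi .. pi
    lat = math.pi / 2.0 - (v / height) * math.pi  # pi/2 .. -pi/2
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)                              # y-up convention
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

Note that rows near the poles map many pixels to a tiny spherical area, which is exactly the non-uniform spatial sampling the survey identifies as one source of the perspective-panorama gap.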
Reconstructing 3D Space from Just Two Images? Tsinghua & NTU Use Generative Models to Unlock a New Paradigm for Spatial Intelligence
量子位· 2025-07-09 01:18
Core Viewpoint
- LangScene-X introduces a generative framework that constructs generalizable 3D language-embedded scenes from only sparse views, dramatically reducing the number of required input images compared with traditional methods such as NeRF, which typically need more than 20 views [2][5].

Group 1: Challenges in 3D Language Scene Generation
- Current 3D language scene generation faces a tension between dense-view dependency and sparse inputs: with only 2-3 images, reconstructions suffer severe 3D structural artifacts and semantic distortion [5].
- Cross-modal information is disconnected and 3D consistency is lacking, because existing models process appearance, geometry, and semantics independently, resulting in semantic misalignment [6].
- High-dimensional compression of language features and limited generalization remain bottlenecks for practical use, with existing methods showing a significant drop in accuracy when switching scenes [7].

Group 2: Solutions Offered by LangScene-X
- LangScene-X employs the TriMap video diffusion model for unified multimodal generation under sparse-input conditions, achieving significant improvements in RGB and normal consistency errors and semantic-mask boundary accuracy [8].
- The Language Quantization Compressor (LQC) rethinks high-dimensional feature compression, mapping high-dimensional CLIP features to 3D discrete indices with minimal reconstruction error and enhanced cross-scene transferability [9][10].
- A progressive training strategy ensures seamless joint generation of RGB images, normal maps, and semantic segmentation maps, improving the efficiency of the 3D reconstruction pipeline [14].

Group 3: Spatial Intelligence and Performance Metrics
- LangScene-X strengthens spatial intelligence by accurately aligning text prompts with 3D scene surfaces, so that natural-language queries can locate objects within 3D environments [15].
- Empirically, LangScene-X reaches an overall mean accuracy (mAcc) of 80.85% and a mean intersection over union (mIoU) of 50.52% on the LERF-OVS dataset, significantly outperforming existing methods [16].
- These capabilities position it as a potential core driver for VR scene construction, human-computer interaction, and foundational technology for autonomous driving and embodied intelligence [18].
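The article does not spell out LQC's internals, but the general idea it describes (compressing high-dimensional language features into compact discrete indices with small reconstruction error) is the classic vector-quantization pattern. A minimal sketch of that pattern follows; the codebook size, feature dimension, and data are invented for illustration and are not the paper's actual LQC design:

```python
import numpy as np

def quantize(features, codebook):
    """Map each feature vector to the index of its nearest codebook entry."""
    # (N, 1, D) - (1, K, D) -> (N, K) squared distances
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)  # (N,) discrete indices

def dequantize(indices, codebook):
    """Reconstruct approximate features from the discrete indices."""
    return codebook[indices]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 512))   # K=64 entries, D=512 (CLIP-like dims)
# Synthetic features: codebook entries plus small noise.
feats = codebook[rng.integers(0, 64, size=8)] + 0.01 * rng.normal(size=(8, 512))
idx = quantize(feats, codebook)         # 8 integers replace 8x512 floats
recon = dequantize(idx, codebook)       # low reconstruction error by design
```

A learned codebook like this is also what makes cross-scene transfer plausible: the indices refer to a shared, scene-independent dictionary rather than to per-scene feature fields.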
Results Are Out! The Latest ICCV 2025 Roundup (Autonomous Driving / Embodied AI / 3D Vision / LLM / CV, and More)
自动驾驶之心· 2025-06-28 13:34
Core Insights
- The article rounds up recent ICCV acceptances, highlighting the excitement around the release of various works related to autonomous driving and advances in the field [2].

Group 1: Autonomous Driving Innovations
- DriveArena is introduced as a controllable generative simulation platform aimed at enhancing autonomous driving capabilities [4].
- Epona presents an autoregressive diffusion world model designed specifically for autonomous driving [4].
- SynthDrive offers a scalable Real2Sim2Real sensor-simulation pipeline for high-fidelity asset generation and driving-data synthesis [4].
- StableDepth targets scene-consistent, scale-invariant monocular depth estimation, which is crucial for perception in autonomous vehicles [4].
- CoopTrack explores end-to-end learning for efficient cooperative sequential perception, strengthening the collaborative capabilities of autonomous systems [4].

Group 2: Image and Vision Technologies
- CycleVAR repurposes autoregressive models for unsupervised one-step image translation, which can benefit visual recognition tasks in autonomous driving [5].
- CoST pursues efficient collaborative perception from a unified spatiotemporal perspective, essential for real-time decision-making in autonomous vehicles [5].
- Hi3DGen generates high-fidelity 3D geometry from images via normal bridging, improving spatial understanding of environments for autonomous systems [5].
- GS-Occ3D scales vision-only occupancy reconstruction for autonomous driving using Gaussian splatting techniques [5].

Group 3: Large Model Applications
- ETA introduces a dual approach to self-driving with large models, improving the efficiency and effectiveness of autonomous driving systems [5].
- Taming the Untamed discusses graph-based knowledge retrieval and reasoning for multimodal large language models (MLLMs), which can significantly improve decision-making in autonomous driving [7].
An Incomplete ICCV 2025 Roundup (Embodied AI / Autonomous Driving / 3D Vision / LLM / CV, and More)
具身智能之心· 2025-06-27 09:41
[Video + Analysis]
- DriveArena: A Controllable Generative Simulation Platform for Autonomous Driving
- Boost 3D Reconstruction using Diffusion-based Intrinsic Estimation
- Epona: Autoregressive Diffusion World Model for Autonomous Driving
- SynthDrive: Scalable Real2Sim2Real Sensor Simulation Pipeline for High-Fidelity Asset Generation and Driving Data Synthesis
- StableDepth: Scene-Consistent and Scale-Invariant Monocular Depth
- CoopTrack: Exploring End-to-End Learning for Efficient Cooperative Sequential Perception
- U-ViLAR: Uncertai ...
After a Year of Effort, Has Apple Surpassed Same-Parameter-Count Qwen 2.5? Three Lines of Code to Plug into Apple Intelligence, and Apple's Own Account of How It Does Inference
AI前线· 2025-06-10 10:05
Compiled by | 华卫, 核子可乐

At this year's WWDC worldwide developers conference, Apple introduced a new generation of foundation language models developed to power Apple Intelligence features. The newly optimized foundation models run efficiently on Apple silicon and include a compact on-device model of roughly 3B parameters and a server-based mixture-of-experts model, the latter a brand-new architecture tailored for Apple's private cloud.

Both foundation models belong to the family of generative models Apple has built to support its users. They improve tool use and reasoning, understand image and text inputs, run faster and more efficiently, and support 15 languages along with the various intelligence features integrated across the platform.

Apple improved the efficiency of both models by developing new architectures. The on-device model is split into two blocks with a 5:3 depth ratio; all key-value (KV) caches in block 2 are shared directly with the caches produced by block 1's final layer, cutting KV-cache memory usage by 37.5% while significantly improving time-to-first-token.

Apple also introduced a Parallel-Track Mixture-of-Experts (PT-MoE) design, a new architecture for the server-side model. The model consists of multiple smaller Transformers ("tracks") that process each token independently, synchronizing only at each track block's ...
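As a back-of-envelope check on the cache-sharing claim: if every layer's KV cache is the same size (a simplifying assumption on my part), then a 5:3 block split in which block 2 reuses block 1's final-layer cache saves exactly the fraction of caches that belonged to block 2, i.e. 3/8. The helper below is purely illustrative arithmetic, not Apple's implementation:

```python
def kv_cache_saving(block1_depth=5, block2_depth=3):
    """Fraction of KV-cache memory saved when block 2's layers reuse the
    cache produced by block 1's last layer, assuming equal-sized caches
    per layer (the 5:3 split is from the article; the rest is assumed)."""
    total = block1_depth + block2_depth
    # Block-2 layers no longer store their own KV caches.
    return block2_depth / total

print(f"{kv_cache_saving():.1%}")  # -> 37.5%
```

The saving compounds with sequence length and batch size, since the KV cache grows linearly in both, which is also why sharing it helps time-to-first-token.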
One md File, 400+ Stars: This Survey Analyzes 3D Scene Generation Across Four Paradigms
机器之心· 2025-06-10 08:41
Core Insights
- The article discusses advances in 3D scene generation, highlighting a comprehensive survey that categorizes existing methods into four main paradigms: procedural methods, neural network-based 3D representation generation, image-driven generation, and video-driven generation [2][4][7].

Summary by Sections

Overview of 3D Scene Generation
- A survey titled "3D Scene Generation: A Survey" reviews over 300 representative papers and charts the field's rapid growth since 2021, driven by the rise of generative models and new 3D representations [2][4][5].

Four Main Paradigms
- The four paradigms provide a clear technical roadmap for 3D scene generation, with performance compared across dimensions such as realism, diversity, viewpoint consistency, semantic consistency, efficiency, controllability, and physical realism [7].

Procedural Generation
- Procedural methods automatically construct complex 3D environments from predefined rules and constraints and are widely used in games and graphics engines. This category can be further divided into neural network-based generation, rule-based generation, constraint optimization, and large language model-assisted generation [8].

Image-based and Video-based Generation
- Image-based generation leverages 2D image models to reconstruct 3D structure, while video-based generation treats a 3D scene as a sequence of images, integrating spatial modeling with temporal consistency [9].

Challenges in 3D Scene Generation
- Despite significant progress, controllable, high-fidelity, physically realistic 3D modeling remains difficult. Key issues include uneven generation capability across scene types, the need for better 3D representations, limited high-quality data, and the lack of unified evaluation standards [10][16].

Future Directions
- Future work should pursue higher-fidelity generation, parametric control, holistic scene generation, and the integration of physical constraints to ensure structural and semantic consistency. Supporting interactive scene generation and unifying perception and generation are likewise crucial for the next generation of 3D modeling systems [12][18].
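The "predefined rules and constraints" of the procedural paradigm can be made concrete with a toy layout generator. Everything here is invented for illustration: one rule (objects sit on a grid floor) and one constraint (at most one object per cell), the simplest possible instance of constraint-respecting procedural placement:

```python
import random

def procedural_scene(n_objects=5, grid=10, seed=42):
    """Toy rule-based 3D scene layout: place objects on a grid floor
    under a non-overlap constraint (illustrative of the procedural
    paradigm; the rule set and parameters are invented)."""
    rng = random.Random(seed)
    occupied, scene = set(), []
    while len(scene) < n_objects:
        cell = (rng.randrange(grid), rng.randrange(grid))
        if cell in occupied:  # constraint: one object per cell
            continue
        occupied.add(cell)
        scene.append({
            "pos": (float(cell[0]), 0.0, float(cell[1])),  # y=0: on the floor
            "size": rng.uniform(0.5, 2.0),
        })
    return scene
```

Real systems layer many such rules (and, per the survey, increasingly use LLMs to author them), but the rejection-until-valid loop above is the core mechanism.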
Could Someone Really Fall in Love with ChatGPT? After a Week of "Dating" an AI, I Noticed Something Was Off
Hu Xiu· 2025-05-11 07:02
Group 1
- The article discusses the growing phenomenon of human-AI relationships, highlighting cases in which individuals developed emotional connections with AI that led to major life decisions such as divorce or marriage to an AI [2][35][41].
- Some users become so immersed in their interactions with AI that they perceive it as a friend or partner, raising concerns about the implications for real-life relationships and mental health [6][41][49].
- The article stresses that users should be aware of the potential for dependency on AI, especially those with underlying psychological issues, and argues that AI should not replace human interaction [42][57].

Group 2
- The text outlines strategies for enhancing interactions with AI, such as customizing prompts and understanding the AI's response patterns to create a more engaging experience [9][31][44].
- It highlights the value of treating AI as a conversational partner rather than merely a tool, which can lead to deeper self-reflection and personal insight [32][41].
- It also notes AI's limitations: while it can provide immediate feedback and companionship, it lacks true emotional understanding and memory retention, which can lead to disillusionment [55][56].