Core Insights
- The article covers the launch of RTFM (Real-Time Frame Model), a generative world model that runs on a single NVIDIA H100 GPU and generates real-time, consistent 3D worlds from 2D images [2][3][10].

Group 1: RTFM Overview
- RTFM generates new 2D images from one or more 2D input images without ever constructing an explicit 3D representation; it functions as a learned renderer (see the first sketch after this summary) [5][17].
- The model is trained on large-scale video data and learns to model 3D geometry, reflections, and shadows purely through observation [5][17].
- RTFM blurs the line between reconstruction and generation: depending on how many input views it receives, the same mechanism handles both tasks [20].

Group 2: Technical Requirements
- Generative world models like RTFM demand significant compute: an interactive 4K video stream requires producing over 100,000 tokens per second (the second sketch below works through the arithmetic) [11].
- To stay consistent across interactions lasting an hour or more, the model must attend over more than 100 million tokens of context [12].
- On today's computational infrastructure those demands are economically infeasible, so RTFM is deliberately designed to be efficient enough to run on existing hardware [13][15].

Group 3: Scalability and Persistence
- RTFM is built to scale, so it can absorb future reductions in computational cost [14].
- The model addresses persistence in generated worlds by tagging each frame with its spatial pose, enabling it to remember and reconstruct scenes it has already visited [23][24].
- A "context juggling" mechanism lets RTFM maintain geometric structure across large scenes while ensuring true world persistence (the third sketch illustrates one such scheme) [25].
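To make the "learned renderer" framing concrete, here is a minimal sketch of the data flow the summary describes: 2D frames in, 2D frames out, conditioned on camera poses, with no explicit 3D scene in between. All names (`Frame`, `FrameModel`, `generate_frame`) are hypothetical illustrations, not World Labs' actual API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Frame:
    image: np.ndarray   # H x W x 3 RGB image
    pose: np.ndarray    # 4 x 4 camera-to-world matrix

class FrameModel:
    """Stand-in for a learned renderer: predicts a new frame from context frames."""

    def generate_frame(self, context: list[Frame], target_pose: np.ndarray) -> Frame:
        # In the real model this would be a neural network trained on
        # large-scale video, with geometry, reflections, and shadows learned
        # implicitly. Here we return a placeholder image to show the data flow.
        h, w = context[-1].image.shape[:2]
        return Frame(image=np.zeros((h, w, 3), dtype=np.uint8), pose=target_pose)

def interactive_loop(model: FrameModel, initial: list[Frame],
                     poses: list[np.ndarray]) -> list[Frame]:
    """Autoregressive generation: each new frame joins the context for the next."""
    context = list(initial)
    for pose in poses:
        frame = model.generate_frame(context, pose)
        # With a single input view this behaves like generation; with many
        # input views it behaves like reconstruction -- the same mechanism.
        context.append(frame)
    return context
```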
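The token figures in Group 2 follow from simple arithmetic. The sketch below works the numbers under assumed values for patch size and frame rate; the actual tokenization is not public, so treat the constants as illustrative.

```python
# Back-of-the-envelope token budget for an interactive 4K stream, illustrating
# the article's figures (>100,000 tokens/s; >100M tokens of context per hour).
# Patch size and frame rate are assumptions for illustration only.

frame_w, frame_h = 3840, 2160   # 4K resolution
patch = 16                      # assumed 16x16 pixel patches per token
fps = 24                        # assumed interactive frame rate

tokens_per_frame = (frame_w // patch) * (frame_h // patch)   # 240 * 135 = 32,400
tokens_per_second = tokens_per_frame * fps                   # ~777,600
tokens_per_hour = tokens_per_second * 3600                   # ~2.8 billion

print(f"tokens/frame:  {tokens_per_frame:,}")
print(f"tokens/second: {tokens_per_second:,}")   # well above 100,000
print(f"tokens/hour:   {tokens_per_hour:,}")     # well above 100,000,000
```

Even with far more aggressive latent compression than assumed here, an hour-long interaction stays well past the 100-million-token mark, which is why naive full-context attention is economically infeasible.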
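Group 3's pose-based persistence can be pictured as a spatial memory keyed by camera pose. The sketch below shows one plausible "context juggling" scheme that retrieves the k stored frames nearest the current viewpoint; the retrieval rule is an assumption for illustration, not World Labs' published mechanism.

```python
import numpy as np

class SpatialFrameMemory:
    """Stores every generated frame with its pose; retrieves nearby frames as context."""

    def __init__(self, k: int = 8):
        self.k = k
        self.poses: list[np.ndarray] = []    # 4x4 camera-to-world matrices
        self.frames: list[np.ndarray] = []   # generated images

    def add(self, pose: np.ndarray, frame: np.ndarray) -> None:
        self.poses.append(pose)
        self.frames.append(frame)

    def retrieve_context(self, target_pose: np.ndarray) -> list[np.ndarray]:
        """Return the frames whose camera positions are closest to the target pose."""
        if not self.poses:
            return []
        target_pos = target_pose[:3, 3]
        positions = np.stack([p[:3, 3] for p in self.poses])
        dists = np.linalg.norm(positions - target_pos, axis=1)
        nearest = np.argsort(dists)[: self.k]
        # Conditioning only on spatially nearby frames keeps the active context
        # bounded regardless of total interaction length, while regions visited
        # long ago remain reconstructible when the viewer returns to them.
        return [self.frames[i] for i in nearest]
```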
Real-Time 3D Universe on a Single GPU: A Stunning New Result from Fei-Fei Li's World Model
机器之心·2025-10-17 02:11